Test Report: QEMU_macOS 19347

0e08cf035d2b49b1a7844497e1c3c2e2e59b4b36:2024-07-29:35562

Failed tests (97/282)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 17.92
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 9.91
55 TestCertOptions 10.14
56 TestCertExpiration 195.38
57 TestDockerFlags 10.42
58 TestForceSystemdFlag 10.15
59 TestForceSystemdEnv 11.34
104 TestFunctional/parallel/ServiceCmdConnect 32.21
176 TestMultiControlPlane/serial/StopSecondaryNode 214.12
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 104
178 TestMultiControlPlane/serial/RestartSecondaryNode 209.27
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.42
181 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
183 TestMultiControlPlane/serial/StopCluster 202.07
184 TestMultiControlPlane/serial/RestartCluster 5.25
185 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
186 TestMultiControlPlane/serial/AddSecondaryNode 0.07
190 TestImageBuild/serial/Setup 10.08
193 TestJSONOutput/start/Command 9.93
199 TestJSONOutput/pause/Command 0.07
205 TestJSONOutput/unpause/Command 0.04
222 TestMinikubeProfile 10.2
225 TestMountStart/serial/StartWithMountFirst 10.03
228 TestMultiNode/serial/FreshStart2Nodes 10.16
229 TestMultiNode/serial/DeployApp2Nodes 108.01
230 TestMultiNode/serial/PingHostFrom2Pods 0.08
231 TestMultiNode/serial/AddNode 0.07
232 TestMultiNode/serial/MultiNodeLabels 0.06
233 TestMultiNode/serial/ProfileList 0.07
234 TestMultiNode/serial/CopyFile 0.06
235 TestMultiNode/serial/StopNode 0.14
236 TestMultiNode/serial/StartAfterStop 53.83
237 TestMultiNode/serial/RestartKeepsNodes 8.25
238 TestMultiNode/serial/DeleteNode 0.1
239 TestMultiNode/serial/StopMultiNode 3.59
240 TestMultiNode/serial/RestartMultiNode 5.25
241 TestMultiNode/serial/ValidateNameConflict 20.11
245 TestPreload 10.14
247 TestScheduledStopUnix 10.09
248 TestSkaffold 12.09
251 TestRunningBinaryUpgrade 593.17
253 TestKubernetesUpgrade 19
266 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.81
267 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.58
269 TestStoppedBinaryUpgrade/Upgrade 571.5
271 TestPause/serial/Start 9.86
281 TestNoKubernetes/serial/StartWithK8s 10.02
282 TestNoKubernetes/serial/StartWithStopK8s 5.3
283 TestNoKubernetes/serial/Start 5.32
287 TestNoKubernetes/serial/StartNoArgs 5.33
289 TestNetworkPlugins/group/kindnet/Start 9.84
290 TestNetworkPlugins/group/auto/Start 9.94
291 TestNetworkPlugins/group/flannel/Start 9.92
292 TestNetworkPlugins/group/enable-default-cni/Start 9.8
293 TestNetworkPlugins/group/bridge/Start 9.9
294 TestNetworkPlugins/group/kubenet/Start 9.81
295 TestNetworkPlugins/group/custom-flannel/Start 9.88
296 TestNetworkPlugins/group/calico/Start 9.8
297 TestNetworkPlugins/group/false/Start 9.72
300 TestStartStop/group/old-k8s-version/serial/FirstStart 9.78
301 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
305 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
309 TestStartStop/group/old-k8s-version/serial/Pause 0.1
311 TestStartStop/group/no-preload/serial/FirstStart 9.86
312 TestStartStop/group/no-preload/serial/DeployApp 0.09
313 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
316 TestStartStop/group/embed-certs/serial/FirstStart 9.96
318 TestStartStop/group/no-preload/serial/SecondStart 6.03
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.09
322 TestStartStop/group/no-preload/serial/Pause 0.1
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.47
325 TestStartStop/group/embed-certs/serial/DeployApp 0.1
326 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.14
329 TestStartStop/group/embed-certs/serial/SecondStart 7.41
330 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
331 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
332 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
334 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
335 TestStartStop/group/embed-certs/serial/Pause 0.11
338 TestStartStop/group/newest-cni/serial/FirstStart 9.94
340 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.81
341 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
342 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
343 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
344 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
349 TestStartStop/group/newest-cni/serial/SecondStart 5.25
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
353 TestStartStop/group/newest-cni/serial/Pause 0.1

TestDownloadOnly/v1.20.0/json-events (17.92s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-418000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-418000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (17.91816275s)

-- stdout --
	{"specversion":"1.0","id":"eab38e05-600c-44cd-9d04-f2f1cf3d5ac5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-418000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d2c15f2f-4789-421d-b89b-6ef75a4fdcee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19347"}}
	{"specversion":"1.0","id":"c28b61a7-baec-4168-a1aa-7abf4184df70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig"}}
	{"specversion":"1.0","id":"bbbbe1df-a333-42b6-931a-2cbbca3088e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"60d78074-6dd9-47f3-949f-b465a72f8375","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"228aed9b-a752-488f-af78-56688b319144","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube"}}
	{"specversion":"1.0","id":"46663e1d-5c39-4fc6-a1d6-b1b5dfd3adc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"68c192bb-4ccc-46ab-9b58-b0d0b0a34002","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7a3aec7f-f85c-4c16-82c2-1c3dd5169318","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"a8828f61-03a5-445a-9894-cbe6d600e5af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"83f84f46-8190-432a-89c5-26c395b3233b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-418000\" primary control-plane node in \"download-only-418000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d03ba491-b60c-439a-b92a-4118c72e1bc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"79829b0f-5328-497c-a852-8c81bf7f3023","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19347-923/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1069a9a60 0x1069a9a60 0x1069a9a60 0x1069a9a60 0x1069a9a60 0x1069a9a60 0x1069a9a60] Decompressors:map[bz2:0x14000167f90 gz:0x14000167f98 tar:0x14000167f40 tar.bz2:0x14000167f50 tar.gz:0x14000167f60 tar.xz:0x14000167f70 tar.zst:0x14000167f80 tbz2:0x14000167f50 tgz:0x14000167f60 txz:0x14000167f70 tzst:0x14000167f80 xz:0x14000167fa0 zip:0x14000167fb0 zst:0x14000167fa8] Getters:map[file:0x14000a13760 http:0x140007fc190 https:0x140007fc1e0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"8bf2038b-fa8e-4f23-8dba-5a123f5ffd42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0729 16:02:49.994403    1392 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:02:49.994559    1392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:02:49.994562    1392 out.go:304] Setting ErrFile to fd 2...
	I0729 16:02:49.994564    1392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:02:49.994711    1392 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	W0729 16:02:49.994795    1392 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19347-923/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19347-923/.minikube/config/config.json: no such file or directory
	I0729 16:02:49.996103    1392 out.go:298] Setting JSON to true
	I0729 16:02:50.013134    1392 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":133,"bootTime":1722294037,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:02:50.013198    1392 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:02:50.019081    1392 out.go:97] [download-only-418000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:02:50.019280    1392 notify.go:220] Checking for updates...
	W0729 16:02:50.019304    1392 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 16:02:50.021980    1392 out.go:169] MINIKUBE_LOCATION=19347
	I0729 16:02:50.024967    1392 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:02:50.030042    1392 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:02:50.033031    1392 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:02:50.035958    1392 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	W0729 16:02:50.042061    1392 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 16:02:50.042291    1392 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:02:50.046991    1392 out.go:97] Using the qemu2 driver based on user configuration
	I0729 16:02:50.047009    1392 start.go:297] selected driver: qemu2
	I0729 16:02:50.047013    1392 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:02:50.047076    1392 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:02:50.051037    1392 out.go:169] Automatically selected the socket_vmnet network
	I0729 16:02:50.056643    1392 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 16:02:50.056721    1392 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 16:02:50.056779    1392 cni.go:84] Creating CNI manager for ""
	I0729 16:02:50.056796    1392 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 16:02:50.056846    1392 start.go:340] cluster config:
	{Name:download-only-418000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:02:50.061996    1392 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:02:50.065979    1392 out.go:97] Downloading VM boot image ...
	I0729 16:02:50.065999    1392 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso
	I0729 16:02:57.080741    1392 out.go:97] Starting "download-only-418000" primary control-plane node in "download-only-418000" cluster
	I0729 16:02:57.080760    1392 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 16:02:57.141059    1392 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 16:02:57.141067    1392 cache.go:56] Caching tarball of preloaded images
	I0729 16:02:57.141214    1392 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 16:02:57.144741    1392 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 16:02:57.144747    1392 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 16:02:57.221290    1392 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 16:03:06.760899    1392 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 16:03:06.761057    1392 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 16:03:07.456998    1392 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 16:03:07.457211    1392 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/download-only-418000/config.json ...
	I0729 16:03:07.457231    1392 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/download-only-418000/config.json: {Name:mk72b5783e5430eb4f6ffdc2d7a3ce3666a8e0b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:03:07.457454    1392 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 16:03:07.457652    1392 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0729 16:03:07.843287    1392 out.go:169] 
	W0729 16:03:07.849246    1392 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19347-923/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1069a9a60 0x1069a9a60 0x1069a9a60 0x1069a9a60 0x1069a9a60 0x1069a9a60 0x1069a9a60] Decompressors:map[bz2:0x14000167f90 gz:0x14000167f98 tar:0x14000167f40 tar.bz2:0x14000167f50 tar.gz:0x14000167f60 tar.xz:0x14000167f70 tar.zst:0x14000167f80 tbz2:0x14000167f50 tgz:0x14000167f60 txz:0x14000167f70 tzst:0x14000167f80 xz:0x14000167fa0 zip:0x14000167fb0 zst:0x14000167fa8] Getters:map[file:0x14000a13760 http:0x140007fc190 https:0x140007fc1e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0729 16:03:07.849270    1392 out_reason.go:110] 
	W0729 16:03:07.855020    1392 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:03:07.859184    1392 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-418000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (17.92s)
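
Analysis: this failure is not environment flakiness. The download of kubectl's checksum file returned HTTP 404; dl.k8s.io does not publish darwin/arm64 artifacts for a release as old as v1.20.0, so the cache step can never succeed on this agent. A minimal sketch to confirm the missing artifact (assumption: a plain HTTP HEAD is a faithful stand-in for the go-getter request in the log; URL copied from the error message above):

    // check_kubectl_artifact.go - hypothetical helper, not part of the test suite.
    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // URL taken verbatim from the INET_CACHE_KUBECTL error above.
        url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
        resp, err := http.Head(url)
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println(url, "->", resp.Status) // this run saw: 404 Not Found
    }

A 404 here suggests no driver or VM fix will help; the legacy-version matrix on darwin/arm64 would need either a newer Kubernetes floor or a mirror that carries the old binaries.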

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19347-923/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
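
Analysis: this subtest is a knock-on failure. It only verifies that the previous subtest left a kubectl binary in the cache, so it cannot pass once the download 404s. A minimal sketch of the same check (cache path copied from the failure message above):

    // check_cache.go - hypothetical helper mirroring the stat in aaa_download_only_test.go:175.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Path copied from the failure message above.
        path := "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
        if _, err := os.Stat(path); err != nil {
            fmt.Println("kubectl not cached:", err) // this run: no such file or directory
            return
        }
        fmt.Println("kubectl cached at", path)
    }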

TestOffline (9.91s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-451000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-451000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.762968583s)

-- stdout --
	* [offline-docker-451000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-451000" primary control-plane node in "offline-docker-451000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-451000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:41:08.450113    4081 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:41:08.450253    4081 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:41:08.450257    4081 out.go:304] Setting ErrFile to fd 2...
	I0729 16:41:08.450259    4081 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:41:08.450394    4081 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:41:08.451570    4081 out.go:298] Setting JSON to false
	I0729 16:41:08.468984    4081 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2431,"bootTime":1722294037,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:41:08.469066    4081 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:41:08.475546    4081 out.go:177] * [offline-docker-451000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:41:08.483506    4081 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:41:08.483538    4081 notify.go:220] Checking for updates...
	I0729 16:41:08.490431    4081 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:41:08.493428    4081 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:41:08.496418    4081 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:41:08.499436    4081 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:41:08.502388    4081 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:41:08.505808    4081 config.go:182] Loaded profile config "multinode-100000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:41:08.505866    4081 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:41:08.509379    4081 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:41:08.516370    4081 start.go:297] selected driver: qemu2
	I0729 16:41:08.516384    4081 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:41:08.516392    4081 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:41:08.518421    4081 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:41:08.521347    4081 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:41:08.524455    4081 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:41:08.524485    4081 cni.go:84] Creating CNI manager for ""
	I0729 16:41:08.524491    4081 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:41:08.524494    4081 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:41:08.524524    4081 start.go:340] cluster config:
	{Name:offline-docker-451000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-451000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:41:08.528260    4081 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:41:08.533384    4081 out.go:177] * Starting "offline-docker-451000" primary control-plane node in "offline-docker-451000" cluster
	I0729 16:41:08.537431    4081 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:41:08.537459    4081 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:41:08.537470    4081 cache.go:56] Caching tarball of preloaded images
	I0729 16:41:08.537542    4081 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:41:08.537547    4081 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:41:08.537611    4081 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/offline-docker-451000/config.json ...
	I0729 16:41:08.537621    4081 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/offline-docker-451000/config.json: {Name:mkfa78dae1bdbfbaec4c854d32b8464c09663658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:41:08.537899    4081 start.go:360] acquireMachinesLock for offline-docker-451000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:41:08.537933    4081 start.go:364] duration metric: took 25.584µs to acquireMachinesLock for "offline-docker-451000"
	I0729 16:41:08.537944    4081 start.go:93] Provisioning new machine with config: &{Name:offline-docker-451000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-451000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:41:08.537968    4081 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:41:08.542397    4081 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 16:41:08.558265    4081 start.go:159] libmachine.API.Create for "offline-docker-451000" (driver="qemu2")
	I0729 16:41:08.558292    4081 client.go:168] LocalClient.Create starting
	I0729 16:41:08.558373    4081 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:41:08.558417    4081 main.go:141] libmachine: Decoding PEM data...
	I0729 16:41:08.558426    4081 main.go:141] libmachine: Parsing certificate...
	I0729 16:41:08.558468    4081 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:41:08.558490    4081 main.go:141] libmachine: Decoding PEM data...
	I0729 16:41:08.558499    4081 main.go:141] libmachine: Parsing certificate...
	I0729 16:41:08.558902    4081 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:41:08.712958    4081 main.go:141] libmachine: Creating SSH key...
	I0729 16:41:08.809402    4081 main.go:141] libmachine: Creating Disk image...
	I0729 16:41:08.809411    4081 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:41:08.809883    4081 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/offline-docker-451000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/offline-docker-451000/disk.qcow2
	I0729 16:41:08.819617    4081 main.go:141] libmachine: STDOUT: 
	I0729 16:41:08.819639    4081 main.go:141] libmachine: STDERR: 
	I0729 16:41:08.819696    4081 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/offline-docker-451000/disk.qcow2 +20000M
	I0729 16:41:08.828590    4081 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:41:08.828610    4081 main.go:141] libmachine: STDERR: 
	I0729 16:41:08.828632    4081 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/offline-docker-451000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/offline-docker-451000/disk.qcow2
	I0729 16:41:08.828636    4081 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:41:08.828647    4081 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:41:08.828673    4081 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/offline-docker-451000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/offline-docker-451000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/offline-docker-451000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:7b:ce:38:26:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/offline-docker-451000/disk.qcow2
	I0729 16:41:08.830302    4081 main.go:141] libmachine: STDOUT: 
	I0729 16:41:08.830317    4081 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:41:08.830335    4081 client.go:171] duration metric: took 272.042375ms to LocalClient.Create
	I0729 16:41:10.832460    4081 start.go:128] duration metric: took 2.294503375s to createHost
	I0729 16:41:10.832517    4081 start.go:83] releasing machines lock for "offline-docker-451000", held for 2.294612792s
	W0729 16:41:10.832556    4081 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:41:10.838590    4081 out.go:177] * Deleting "offline-docker-451000" in qemu2 ...
	W0729 16:41:10.852027    4081 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:41:10.852040    4081 start.go:729] Will try again in 5 seconds ...
	I0729 16:41:15.854069    4081 start.go:360] acquireMachinesLock for offline-docker-451000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:41:15.854209    4081 start.go:364] duration metric: took 104.25µs to acquireMachinesLock for "offline-docker-451000"
	I0729 16:41:15.854243    4081 start.go:93] Provisioning new machine with config: &{Name:offline-docker-451000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-451000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:41:15.854305    4081 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:41:15.867604    4081 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 16:41:15.883506    4081 start.go:159] libmachine.API.Create for "offline-docker-451000" (driver="qemu2")
	I0729 16:41:15.883616    4081 client.go:168] LocalClient.Create starting
	I0729 16:41:15.883694    4081 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:41:15.883732    4081 main.go:141] libmachine: Decoding PEM data...
	I0729 16:41:15.883742    4081 main.go:141] libmachine: Parsing certificate...
	I0729 16:41:15.883777    4081 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:41:15.883801    4081 main.go:141] libmachine: Decoding PEM data...
	I0729 16:41:15.883808    4081 main.go:141] libmachine: Parsing certificate...
	I0729 16:41:15.884122    4081 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:41:16.044521    4081 main.go:141] libmachine: Creating SSH key...
	I0729 16:41:16.108364    4081 main.go:141] libmachine: Creating Disk image...
	I0729 16:41:16.108371    4081 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:41:16.117688    4081 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/offline-docker-451000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/offline-docker-451000/disk.qcow2
	I0729 16:41:16.132532    4081 main.go:141] libmachine: STDOUT: 
	I0729 16:41:16.132549    4081 main.go:141] libmachine: STDERR: 
	I0729 16:41:16.132591    4081 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/offline-docker-451000/disk.qcow2 +20000M
	I0729 16:41:16.140627    4081 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:41:16.140645    4081 main.go:141] libmachine: STDERR: 
	I0729 16:41:16.140657    4081 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/offline-docker-451000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/offline-docker-451000/disk.qcow2
	I0729 16:41:16.140662    4081 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:41:16.140668    4081 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:41:16.140699    4081 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/offline-docker-451000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/offline-docker-451000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/offline-docker-451000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:d0:fe:9b:02:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/offline-docker-451000/disk.qcow2
	I0729 16:41:16.142277    4081 main.go:141] libmachine: STDOUT: 
	I0729 16:41:16.142292    4081 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:41:16.142305    4081 client.go:171] duration metric: took 258.689166ms to LocalClient.Create
	I0729 16:41:18.144464    4081 start.go:128] duration metric: took 2.29016825s to createHost
	I0729 16:41:18.144509    4081 start.go:83] releasing machines lock for "offline-docker-451000", held for 2.290323291s
	W0729 16:41:18.144888    4081 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-451000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-451000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:41:18.154502    4081 out.go:177] 
	W0729 16:41:18.158495    4081 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:41:18.158548    4081 out.go:239] * 
	* 
	W0729 16:41:18.161507    4081 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:41:18.169413    4081 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-451000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-29 16:41:18.185115 -0700 PDT m=+2308.291229960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-451000 -n offline-docker-451000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-451000 -n offline-docker-451000: exit status 7 (69.864667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-451000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-451000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-451000
--- FAIL: TestOffline (9.91s)
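
Analysis: TestOffline establishes the pattern behind most of the 97 failures in this run. libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and every attempt dies with "Failed to connect to \"/var/run/socket_vmnet\": Connection refused", which indicates the socket_vmnet daemon is not running (or not listening at that path) on MacOS-M1-Agent-2. A minimal sketch to probe the socket directly on the agent (assumption: the daemon accepts plain unix-socket connections at the path shown in the logs):

    // probe_socket_vmnet.go - hypothetical host-side check, not part of minikube.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Socket path copied from the QEMU command line in the log above.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet unreachable:", err) // this run: connection refused
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If this probe fails the same way, restarting the daemon on the agent (rather than rerunning the tests) is the likely fix; the repeated "Connection refused" failures below appear to share this root cause.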

TestCertOptions (10.14s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-940000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-940000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.887722042s)

-- stdout --
	* [cert-options-940000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-940000" primary control-plane node in "cert-options-940000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-940000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-940000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-940000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-940000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-940000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (77.3175ms)

-- stdout --
	* The control-plane node cert-options-940000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-940000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-940000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-940000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-940000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-940000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (38.528833ms)

-- stdout --
	* The control-plane node cert-options-940000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-940000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-940000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	* The control-plane node cert-options-940000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-940000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-29 16:41:50.128858 -0700 PDT m=+2340.235429918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-940000 -n cert-options-940000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-940000 -n cert-options-940000: exit status 7 (29.027333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-940000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-940000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-940000
--- FAIL: TestCertOptions (10.14s)
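
Note: every start failure in this group reduces to the same host-side fault — nothing is accepting connections on /var/run/socket_vmnet, so socket_vmnet_client cannot hand qemu a connected network descriptor and the VM never boots. A minimal triage sketch on the CI host (the service-management commands are assumptions about how socket_vmnet was installed on this agent, not taken from this log):

	# is the unix socket present, and is any process holding it?
	ls -l /var/run/socket_vmnet
	sudo lsof -U | grep socket_vmnet
	# restart the daemon, assuming a Homebrew-managed install
	sudo brew services restart socket_vmnet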

TestCertExpiration (195.38s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-792000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-792000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.029319792s)

-- stdout --
	* [cert-expiration-792000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-792000" primary control-plane node in "cert-expiration-792000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-792000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-792000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-792000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-792000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-792000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.216609375s)

-- stdout --
	* [cert-expiration-792000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-792000" primary control-plane node in "cert-expiration-792000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-792000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-792000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-792000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-792000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-792000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-792000" primary control-plane node in "cert-expiration-792000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-792000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-792000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-792000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-29 16:44:50.106542 -0700 PDT m=+2520.215688543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-792000 -n cert-expiration-792000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-792000 -n cert-expiration-792000: exit status 7 (54.05025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-792000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-792000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-792000
--- FAIL: TestCertExpiration (195.38s)
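
TestCertExpiration's intent: provision with --cert-expiration=3m, wait out the three minutes, then restart with --cert-expiration=8760h and assert that the start output warns about the expired certs. Because both starts die on the socket_vmnet error, that assertion never sees a real cluster. On a healthy host the short-lived cert could be confirmed directly; a sketch (profile name taken from this log; the openssl invocation is a standard one, not part of the test):

	out/minikube-darwin-arm64 -p cert-expiration-792000 ssh -- "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"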

TestDockerFlags (10.42s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-935000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-935000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.18819575s)

-- stdout --
	* [docker-flags-935000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-935000" primary control-plane node in "docker-flags-935000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-935000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:41:29.699745    4276 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:41:29.699959    4276 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:41:29.699962    4276 out.go:304] Setting ErrFile to fd 2...
	I0729 16:41:29.699964    4276 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:41:29.700104    4276 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:41:29.701193    4276 out.go:298] Setting JSON to false
	I0729 16:41:29.717090    4276 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2452,"bootTime":1722294037,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:41:29.717164    4276 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:41:29.722793    4276 out.go:177] * [docker-flags-935000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:41:29.729954    4276 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:41:29.730004    4276 notify.go:220] Checking for updates...
	I0729 16:41:29.735932    4276 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:41:29.738944    4276 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:41:29.740225    4276 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:41:29.742972    4276 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:41:29.745940    4276 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:41:29.749317    4276 config.go:182] Loaded profile config "force-systemd-flag-890000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:41:29.749386    4276 config.go:182] Loaded profile config "multinode-100000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:41:29.749429    4276 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:41:29.753936    4276 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:41:29.760960    4276 start.go:297] selected driver: qemu2
	I0729 16:41:29.760966    4276 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:41:29.760972    4276 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:41:29.763104    4276 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:41:29.765939    4276 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:41:29.769080    4276 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0729 16:41:29.769106    4276 cni.go:84] Creating CNI manager for ""
	I0729 16:41:29.769113    4276 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:41:29.769117    4276 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:41:29.769143    4276 start.go:340] cluster config:
	{Name:docker-flags-935000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:41:29.772809    4276 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:41:29.779908    4276 out.go:177] * Starting "docker-flags-935000" primary control-plane node in "docker-flags-935000" cluster
	I0729 16:41:29.783925    4276 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:41:29.783943    4276 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:41:29.783959    4276 cache.go:56] Caching tarball of preloaded images
	I0729 16:41:29.784027    4276 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:41:29.784034    4276 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:41:29.784098    4276 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/docker-flags-935000/config.json ...
	I0729 16:41:29.784109    4276 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/docker-flags-935000/config.json: {Name:mkfe9d7a3db9457adfd7a02edbd9f1eeece0bb59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:41:29.784446    4276 start.go:360] acquireMachinesLock for docker-flags-935000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:41:29.784479    4276 start.go:364] duration metric: took 27.917µs to acquireMachinesLock for "docker-flags-935000"
	I0729 16:41:29.784490    4276 start.go:93] Provisioning new machine with config: &{Name:docker-flags-935000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:41:29.784517    4276 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:41:29.787962    4276 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 16:41:29.805145    4276 start.go:159] libmachine.API.Create for "docker-flags-935000" (driver="qemu2")
	I0729 16:41:29.805175    4276 client.go:168] LocalClient.Create starting
	I0729 16:41:29.805238    4276 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:41:29.805269    4276 main.go:141] libmachine: Decoding PEM data...
	I0729 16:41:29.805281    4276 main.go:141] libmachine: Parsing certificate...
	I0729 16:41:29.805316    4276 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:41:29.805339    4276 main.go:141] libmachine: Decoding PEM data...
	I0729 16:41:29.805344    4276 main.go:141] libmachine: Parsing certificate...
	I0729 16:41:29.805818    4276 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:41:29.960232    4276 main.go:141] libmachine: Creating SSH key...
	I0729 16:41:30.016091    4276 main.go:141] libmachine: Creating Disk image...
	I0729 16:41:30.016096    4276 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:41:30.016304    4276 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/docker-flags-935000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/docker-flags-935000/disk.qcow2
	I0729 16:41:30.025416    4276 main.go:141] libmachine: STDOUT: 
	I0729 16:41:30.025433    4276 main.go:141] libmachine: STDERR: 
	I0729 16:41:30.025477    4276 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/docker-flags-935000/disk.qcow2 +20000M
	I0729 16:41:30.033276    4276 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:41:30.033301    4276 main.go:141] libmachine: STDERR: 
	I0729 16:41:30.033315    4276 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/docker-flags-935000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/docker-flags-935000/disk.qcow2
	I0729 16:41:30.033320    4276 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:41:30.033331    4276 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:41:30.033364    4276 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/docker-flags-935000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/docker-flags-935000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/docker-flags-935000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:dc:6f:67:7e:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/docker-flags-935000/disk.qcow2
	I0729 16:41:30.035026    4276 main.go:141] libmachine: STDOUT: 
	I0729 16:41:30.035040    4276 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:41:30.035061    4276 client.go:171] duration metric: took 229.881958ms to LocalClient.Create
	I0729 16:41:32.037258    4276 start.go:128] duration metric: took 2.25273975s to createHost
	I0729 16:41:32.037356    4276 start.go:83] releasing machines lock for "docker-flags-935000", held for 2.252887375s
	W0729 16:41:32.037544    4276 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:41:32.055938    4276 out.go:177] * Deleting "docker-flags-935000" in qemu2 ...
	W0729 16:41:32.082534    4276 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:41:32.082579    4276 start.go:729] Will try again in 5 seconds ...
	I0729 16:41:37.084730    4276 start.go:360] acquireMachinesLock for docker-flags-935000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:41:37.348069    4276 start.go:364] duration metric: took 263.225708ms to acquireMachinesLock for "docker-flags-935000"
	I0729 16:41:37.348265    4276 start.go:93] Provisioning new machine with config: &{Name:docker-flags-935000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:41:37.348657    4276 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:41:37.362264    4276 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 16:41:37.410934    4276 start.go:159] libmachine.API.Create for "docker-flags-935000" (driver="qemu2")
	I0729 16:41:37.410982    4276 client.go:168] LocalClient.Create starting
	I0729 16:41:37.411103    4276 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:41:37.411169    4276 main.go:141] libmachine: Decoding PEM data...
	I0729 16:41:37.411186    4276 main.go:141] libmachine: Parsing certificate...
	I0729 16:41:37.411251    4276 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:41:37.411297    4276 main.go:141] libmachine: Decoding PEM data...
	I0729 16:41:37.411312    4276 main.go:141] libmachine: Parsing certificate...
	I0729 16:41:37.411901    4276 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:41:37.576479    4276 main.go:141] libmachine: Creating SSH key...
	I0729 16:41:37.790226    4276 main.go:141] libmachine: Creating Disk image...
	I0729 16:41:37.790235    4276 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:41:37.790479    4276 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/docker-flags-935000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/docker-flags-935000/disk.qcow2
	I0729 16:41:37.800298    4276 main.go:141] libmachine: STDOUT: 
	I0729 16:41:37.800315    4276 main.go:141] libmachine: STDERR: 
	I0729 16:41:37.800366    4276 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/docker-flags-935000/disk.qcow2 +20000M
	I0729 16:41:37.808186    4276 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:41:37.808200    4276 main.go:141] libmachine: STDERR: 
	I0729 16:41:37.808211    4276 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/docker-flags-935000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/docker-flags-935000/disk.qcow2
	I0729 16:41:37.808216    4276 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:41:37.808230    4276 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:41:37.808264    4276 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/docker-flags-935000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/docker-flags-935000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/docker-flags-935000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:45:e3:b3:ce:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/docker-flags-935000/disk.qcow2
	I0729 16:41:37.809853    4276 main.go:141] libmachine: STDOUT: 
	I0729 16:41:37.809866    4276 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:41:37.809878    4276 client.go:171] duration metric: took 398.893875ms to LocalClient.Create
	I0729 16:41:39.812029    4276 start.go:128] duration metric: took 2.463378708s to createHost
	I0729 16:41:39.812159    4276 start.go:83] releasing machines lock for "docker-flags-935000", held for 2.463987959s
	W0729 16:41:39.812628    4276 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-935000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-935000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:41:39.831099    4276 out.go:177] 
	W0729 16:41:39.836071    4276 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:41:39.836127    4276 out.go:239] * 
	* 
	W0729 16:41:39.839085    4276 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:41:39.847030    4276 out.go:177] 

** /stderr **
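
The trace above shows why each attempt fails in milliseconds rather than timing out: libmachine does not exec qemu directly, it wraps it in socket_vmnet_client, which first connects to the daemon's unix socket and then passes the connected descriptor to qemu as fd 3. Stripped of profile-specific paths, the invocation pattern is (reconstructed from the exec line above; the elided arguments are not a command to run verbatim):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2048 -smp 2 \
	  -device virtio-net-pci,netdev=net0,mac=... -netdev socket,id=net0,fd=3 ... disk.qcow2

When the connect is refused, socket_vmnet_client exits with status 1 before qemu ever starts, which matches the instant "exit status 1" wrapped in every GUEST_PROVISION error here.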
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-935000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-935000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-935000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (75.840084ms)

-- stdout --
	* The control-plane node docker-flags-935000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-935000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-935000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-935000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-935000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-935000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-935000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-935000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-935000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (43.628917ms)

-- stdout --
	* The control-plane node docker-flags-935000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-935000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-935000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-935000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-935000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-935000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-29 16:41:39.983922 -0700 PDT m=+2330.090348376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-935000 -n docker-flags-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-935000 -n docker-flags-935000: exit status 7 (28.159208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-935000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-935000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-935000
--- FAIL: TestDockerFlags (10.42s)
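
For reference, the assertions this test never reached are easy to replay by hand once a profile actually boots. The ssh commands are the ones the test itself runs (taken from the log above); the grep filters are added here only to illustrate what a passing run must contain:

	out/minikube-darwin-arm64 -p docker-flags-935000 ssh "sudo systemctl show docker --property=Environment --no-pager" | grep -E 'FOO=BAR|BAZ=BAT'
	out/minikube-darwin-arm64 -p docker-flags-935000 ssh "sudo systemctl show docker --property=ExecStart --no-pager" | grep -e '--debug'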

TestForceSystemdFlag (10.15s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-890000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-890000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.962521125s)

-- stdout --
	* [force-systemd-flag-890000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-890000" primary control-plane node in "force-systemd-flag-890000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-890000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:41:24.734267    4253 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:41:24.734391    4253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:41:24.734395    4253 out.go:304] Setting ErrFile to fd 2...
	I0729 16:41:24.734397    4253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:41:24.734548    4253 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:41:24.735688    4253 out.go:298] Setting JSON to false
	I0729 16:41:24.751672    4253 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2447,"bootTime":1722294037,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:41:24.751740    4253 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:41:24.758642    4253 out.go:177] * [force-systemd-flag-890000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:41:24.765589    4253 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:41:24.765624    4253 notify.go:220] Checking for updates...
	I0729 16:41:24.772555    4253 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:41:24.776608    4253 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:41:24.779632    4253 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:41:24.782608    4253 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:41:24.785639    4253 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:41:24.788923    4253 config.go:182] Loaded profile config "force-systemd-env-887000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:41:24.788992    4253 config.go:182] Loaded profile config "multinode-100000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:41:24.789038    4253 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:41:24.793586    4253 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:41:24.800561    4253 start.go:297] selected driver: qemu2
	I0729 16:41:24.800569    4253 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:41:24.800575    4253 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:41:24.802882    4253 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:41:24.805582    4253 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:41:24.808735    4253 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 16:41:24.808775    4253 cni.go:84] Creating CNI manager for ""
	I0729 16:41:24.808785    4253 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:41:24.808791    4253 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:41:24.808831    4253 start.go:340] cluster config:
	{Name:force-systemd-flag-890000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-890000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:41:24.812577    4253 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:41:24.820614    4253 out.go:177] * Starting "force-systemd-flag-890000" primary control-plane node in "force-systemd-flag-890000" cluster
	I0729 16:41:24.824486    4253 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:41:24.824503    4253 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:41:24.824514    4253 cache.go:56] Caching tarball of preloaded images
	I0729 16:41:24.824582    4253 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:41:24.824587    4253 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:41:24.824648    4253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/force-systemd-flag-890000/config.json ...
	I0729 16:41:24.824660    4253 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/force-systemd-flag-890000/config.json: {Name:mk60e27ec2fb4e0631420a990a4ec6a6936d8695 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:41:24.824986    4253 start.go:360] acquireMachinesLock for force-systemd-flag-890000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:41:24.825035    4253 start.go:364] duration metric: took 41.459µs to acquireMachinesLock for "force-systemd-flag-890000"
	I0729 16:41:24.825046    4253 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-890000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-890000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:41:24.825075    4253 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:41:24.832623    4253 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 16:41:24.849930    4253 start.go:159] libmachine.API.Create for "force-systemd-flag-890000" (driver="qemu2")
	I0729 16:41:24.849951    4253 client.go:168] LocalClient.Create starting
	I0729 16:41:24.850010    4253 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:41:24.850044    4253 main.go:141] libmachine: Decoding PEM data...
	I0729 16:41:24.850056    4253 main.go:141] libmachine: Parsing certificate...
	I0729 16:41:24.850097    4253 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:41:24.850122    4253 main.go:141] libmachine: Decoding PEM data...
	I0729 16:41:24.850131    4253 main.go:141] libmachine: Parsing certificate...
	I0729 16:41:24.850600    4253 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:41:25.004825    4253 main.go:141] libmachine: Creating SSH key...
	I0729 16:41:25.145399    4253 main.go:141] libmachine: Creating Disk image...
	I0729 16:41:25.145405    4253 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:41:25.145608    4253 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-flag-890000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-flag-890000/disk.qcow2
	I0729 16:41:25.155196    4253 main.go:141] libmachine: STDOUT: 
	I0729 16:41:25.155217    4253 main.go:141] libmachine: STDERR: 
	I0729 16:41:25.155265    4253 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-flag-890000/disk.qcow2 +20000M
	I0729 16:41:25.163072    4253 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:41:25.163090    4253 main.go:141] libmachine: STDERR: 
	I0729 16:41:25.163106    4253 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-flag-890000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-flag-890000/disk.qcow2
	I0729 16:41:25.163146    4253 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:41:25.163155    4253 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:41:25.163184    4253 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-flag-890000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-flag-890000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-flag-890000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:34:21:4f:49:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-flag-890000/disk.qcow2
	I0729 16:41:25.164844    4253 main.go:141] libmachine: STDOUT: 
	I0729 16:41:25.164860    4253 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:41:25.164876    4253 client.go:171] duration metric: took 314.926334ms to LocalClient.Create
	I0729 16:41:27.167024    4253 start.go:128] duration metric: took 2.341965458s to createHost
	I0729 16:41:27.167087    4253 start.go:83] releasing machines lock for "force-systemd-flag-890000", held for 2.342075958s
	W0729 16:41:27.167143    4253 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:41:27.189344    4253 out.go:177] * Deleting "force-systemd-flag-890000" in qemu2 ...
	W0729 16:41:27.212044    4253 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:41:27.212063    4253 start.go:729] Will try again in 5 seconds ...
	I0729 16:41:32.214267    4253 start.go:360] acquireMachinesLock for force-systemd-flag-890000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:41:32.214638    4253 start.go:364] duration metric: took 289.333µs to acquireMachinesLock for "force-systemd-flag-890000"
	I0729 16:41:32.214785    4253 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-890000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-890000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:41:32.215074    4253 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:41:32.223547    4253 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 16:41:32.273263    4253 start.go:159] libmachine.API.Create for "force-systemd-flag-890000" (driver="qemu2")
	I0729 16:41:32.273306    4253 client.go:168] LocalClient.Create starting
	I0729 16:41:32.273424    4253 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:41:32.273496    4253 main.go:141] libmachine: Decoding PEM data...
	I0729 16:41:32.273516    4253 main.go:141] libmachine: Parsing certificate...
	I0729 16:41:32.273579    4253 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:41:32.273624    4253 main.go:141] libmachine: Decoding PEM data...
	I0729 16:41:32.273642    4253 main.go:141] libmachine: Parsing certificate...
	I0729 16:41:32.274765    4253 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:41:32.447293    4253 main.go:141] libmachine: Creating SSH key...
	I0729 16:41:32.605161    4253 main.go:141] libmachine: Creating Disk image...
	I0729 16:41:32.605169    4253 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:41:32.605377    4253 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-flag-890000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-flag-890000/disk.qcow2
	I0729 16:41:32.615133    4253 main.go:141] libmachine: STDOUT: 
	I0729 16:41:32.615150    4253 main.go:141] libmachine: STDERR: 
	I0729 16:41:32.615213    4253 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-flag-890000/disk.qcow2 +20000M
	I0729 16:41:32.623061    4253 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:41:32.623076    4253 main.go:141] libmachine: STDERR: 
	I0729 16:41:32.623086    4253 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-flag-890000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-flag-890000/disk.qcow2
	I0729 16:41:32.623093    4253 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:41:32.623104    4253 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:41:32.623133    4253 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-flag-890000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-flag-890000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-flag-890000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:bc:7b:37:d7:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-flag-890000/disk.qcow2
	I0729 16:41:32.624780    4253 main.go:141] libmachine: STDOUT: 
	I0729 16:41:32.624793    4253 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:41:32.624806    4253 client.go:171] duration metric: took 351.500375ms to LocalClient.Create
	I0729 16:41:34.627021    4253 start.go:128] duration metric: took 2.411939s to createHost
	I0729 16:41:34.627133    4253 start.go:83] releasing machines lock for "force-systemd-flag-890000", held for 2.412504417s
	W0729 16:41:34.627476    4253 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-890000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-890000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:41:34.636839    4253 out.go:177] 
	W0729 16:41:34.644096    4253 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:41:34.644123    4253 out.go:239] * 
	* 
	W0729 16:41:34.646813    4253 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:41:34.656130    4253 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-890000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-890000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-890000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.598667ms)

-- stdout --
	* The control-plane node force-systemd-flag-890000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-890000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-890000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-29 16:41:34.752279 -0700 PDT m=+2324.858630668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-890000 -n force-systemd-flag-890000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-890000 -n force-systemd-flag-890000: exit status 7 (34.211792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-890000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-890000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-890000
--- FAIL: TestForceSystemdFlag (10.15s)
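
The failure mode here (and in TestForceSystemdEnv below) is the same: QEMU exits before the VM boots because socket_vmnet_client cannot reach the socket_vmnet daemon. A minimal shell check along these lines, assuming a Homebrew-managed socket_vmnet install (the socket path comes from the log above; the service name is an assumption):

	# Does the unix socket that QEMU connects to exist?
	ls -l /var/run/socket_vmnet
	# Is a launchd-managed socket_vmnet service loaded? (assumes a launchd install)
	sudo launchctl list | grep -i socket_vmnet
	# If the socket is absent, restarting the service often clears the
	# "Connection refused" error:
	sudo brew services restart socket_vmnet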

TestForceSystemdEnv (11.34s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-887000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-887000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.151872s)

-- stdout --
	* [force-systemd-env-887000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-887000" primary control-plane node in "force-systemd-env-887000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-887000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:41:18.361333    4220 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:41:18.361467    4220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:41:18.361470    4220 out.go:304] Setting ErrFile to fd 2...
	I0729 16:41:18.361473    4220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:41:18.361600    4220 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:41:18.362629    4220 out.go:298] Setting JSON to false
	I0729 16:41:18.378721    4220 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2441,"bootTime":1722294037,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:41:18.378790    4220 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:41:18.386162    4220 out.go:177] * [force-systemd-env-887000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:41:18.393112    4220 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:41:18.393144    4220 notify.go:220] Checking for updates...
	I0729 16:41:18.398485    4220 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:41:18.401121    4220 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:41:18.404124    4220 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:41:18.407156    4220 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:41:18.410127    4220 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0729 16:41:18.413472    4220 config.go:182] Loaded profile config "multinode-100000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:41:18.413514    4220 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:41:18.418093    4220 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:41:18.425127    4220 start.go:297] selected driver: qemu2
	I0729 16:41:18.425136    4220 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:41:18.425145    4220 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:41:18.427386    4220 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:41:18.430144    4220 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:41:18.433187    4220 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 16:41:18.433215    4220 cni.go:84] Creating CNI manager for ""
	I0729 16:41:18.433223    4220 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:41:18.433230    4220 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:41:18.433259    4220 start.go:340] cluster config:
	{Name:force-systemd-env-887000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-887000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:41:18.436895    4220 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:41:18.443927    4220 out.go:177] * Starting "force-systemd-env-887000" primary control-plane node in "force-systemd-env-887000" cluster
	I0729 16:41:18.448130    4220 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:41:18.448149    4220 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:41:18.448161    4220 cache.go:56] Caching tarball of preloaded images
	I0729 16:41:18.448238    4220 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:41:18.448245    4220 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:41:18.448306    4220 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/force-systemd-env-887000/config.json ...
	I0729 16:41:18.448317    4220 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/force-systemd-env-887000/config.json: {Name:mk48481c90ec5cb0853a23623dbe3666b9ad7338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:41:18.448535    4220 start.go:360] acquireMachinesLock for force-systemd-env-887000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:41:18.448570    4220 start.go:364] duration metric: took 28.792µs to acquireMachinesLock for "force-systemd-env-887000"
	I0729 16:41:18.448583    4220 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-887000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-887000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:41:18.448610    4220 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:41:18.454110    4220 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 16:41:18.471657    4220 start.go:159] libmachine.API.Create for "force-systemd-env-887000" (driver="qemu2")
	I0729 16:41:18.471681    4220 client.go:168] LocalClient.Create starting
	I0729 16:41:18.471753    4220 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:41:18.471785    4220 main.go:141] libmachine: Decoding PEM data...
	I0729 16:41:18.471795    4220 main.go:141] libmachine: Parsing certificate...
	I0729 16:41:18.471839    4220 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:41:18.471866    4220 main.go:141] libmachine: Decoding PEM data...
	I0729 16:41:18.471874    4220 main.go:141] libmachine: Parsing certificate...
	I0729 16:41:18.472226    4220 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:41:18.626776    4220 main.go:141] libmachine: Creating SSH key...
	I0729 16:41:18.759077    4220 main.go:141] libmachine: Creating Disk image...
	I0729 16:41:18.759086    4220 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:41:18.759280    4220 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-env-887000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-env-887000/disk.qcow2
	I0729 16:41:18.768794    4220 main.go:141] libmachine: STDOUT: 
	I0729 16:41:18.768815    4220 main.go:141] libmachine: STDERR: 
	I0729 16:41:18.768875    4220 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-env-887000/disk.qcow2 +20000M
	I0729 16:41:18.777075    4220 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:41:18.777089    4220 main.go:141] libmachine: STDERR: 
	I0729 16:41:18.777110    4220 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-env-887000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-env-887000/disk.qcow2
	I0729 16:41:18.777115    4220 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:41:18.777127    4220 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:41:18.777156    4220 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-env-887000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-env-887000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-env-887000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:52:85:33:c1:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-env-887000/disk.qcow2
	I0729 16:41:18.778787    4220 main.go:141] libmachine: STDOUT: 
	I0729 16:41:18.778803    4220 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:41:18.778819    4220 client.go:171] duration metric: took 307.137875ms to LocalClient.Create
	I0729 16:41:20.780860    4220 start.go:128] duration metric: took 2.33227625s to createHost
	I0729 16:41:20.780879    4220 start.go:83] releasing machines lock for "force-systemd-env-887000", held for 2.332337959s
	W0729 16:41:20.780899    4220 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:41:20.789517    4220 out.go:177] * Deleting "force-systemd-env-887000" in qemu2 ...
	W0729 16:41:20.798592    4220 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:41:20.798611    4220 start.go:729] Will try again in 5 seconds ...
	I0729 16:41:25.800796    4220 start.go:360] acquireMachinesLock for force-systemd-env-887000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:41:27.167275    4220 start.go:364] duration metric: took 1.366365791s to acquireMachinesLock for "force-systemd-env-887000"
	I0729 16:41:27.167395    4220 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-887000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-887000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:41:27.167642    4220 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:41:27.178125    4220 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 16:41:27.228154    4220 start.go:159] libmachine.API.Create for "force-systemd-env-887000" (driver="qemu2")
	I0729 16:41:27.228205    4220 client.go:168] LocalClient.Create starting
	I0729 16:41:27.228352    4220 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:41:27.228413    4220 main.go:141] libmachine: Decoding PEM data...
	I0729 16:41:27.228430    4220 main.go:141] libmachine: Parsing certificate...
	I0729 16:41:27.228487    4220 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:41:27.228531    4220 main.go:141] libmachine: Decoding PEM data...
	I0729 16:41:27.228543    4220 main.go:141] libmachine: Parsing certificate...
	I0729 16:41:27.229218    4220 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:41:27.393500    4220 main.go:141] libmachine: Creating SSH key...
	I0729 16:41:27.420946    4220 main.go:141] libmachine: Creating Disk image...
	I0729 16:41:27.420951    4220 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:41:27.421138    4220 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-env-887000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-env-887000/disk.qcow2
	I0729 16:41:27.430479    4220 main.go:141] libmachine: STDOUT: 
	I0729 16:41:27.430507    4220 main.go:141] libmachine: STDERR: 
	I0729 16:41:27.430566    4220 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-env-887000/disk.qcow2 +20000M
	I0729 16:41:27.438456    4220 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:41:27.438471    4220 main.go:141] libmachine: STDERR: 
	I0729 16:41:27.438482    4220 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-env-887000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-env-887000/disk.qcow2
	I0729 16:41:27.438486    4220 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:41:27.438498    4220 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:41:27.438519    4220 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-env-887000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-env-887000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-env-887000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:c3:ec:eb:1d:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/force-systemd-env-887000/disk.qcow2
	I0729 16:41:27.440136    4220 main.go:141] libmachine: STDOUT: 
	I0729 16:41:27.440150    4220 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:41:27.440163    4220 client.go:171] duration metric: took 211.955209ms to LocalClient.Create
	I0729 16:41:29.442468    4220 start.go:128] duration metric: took 2.274803875s to createHost
	I0729 16:41:29.442548    4220 start.go:83] releasing machines lock for "force-systemd-env-887000", held for 2.2752525s
	W0729 16:41:29.442897    4220 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-887000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-887000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:41:29.451612    4220 out.go:177] 
	W0729 16:41:29.458627    4220 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:41:29.458658    4220 out.go:239] * 
	* 
	W0729 16:41:29.461274    4220 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:41:29.469523    4220 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-887000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-887000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-887000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.0545ms)

-- stdout --
	* The control-plane node force-systemd-env-887000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-887000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-887000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-29 16:41:29.564054 -0700 PDT m=+2319.670331751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-887000 -n force-systemd-env-887000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-887000 -n force-systemd-env-887000: exit status 7 (34.819917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-887000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-887000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-887000
--- FAIL: TestForceSystemdEnv (11.34s)

TestFunctional/parallel/ServiceCmdConnect (32.21s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-753000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-753000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-x4s9d" [38bee33c-b50e-47c8-80e1-f22b5a0ff484] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-x4s9d" [38bee33c-b50e-47c8-80e1-f22b5a0ff484] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004083125s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:30939
functional_test.go:1657: error fetching http://192.168.105.4:30939: Get "http://192.168.105.4:30939": dial tcp 192.168.105.4:30939: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30939: Get "http://192.168.105.4:30939": dial tcp 192.168.105.4:30939: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30939: Get "http://192.168.105.4:30939": dial tcp 192.168.105.4:30939: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30939: Get "http://192.168.105.4:30939": dial tcp 192.168.105.4:30939: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30939: Get "http://192.168.105.4:30939": dial tcp 192.168.105.4:30939: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30939: Get "http://192.168.105.4:30939": dial tcp 192.168.105.4:30939: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30939: Get "http://192.168.105.4:30939": dial tcp 192.168.105.4:30939: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:30939: Get "http://192.168.105.4:30939": dial tcp 192.168.105.4:30939: connect: connection refused
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-753000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-6f49f58cd5-x4s9d
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-753000/192.168.105.4
Start Time:       Mon, 29 Jul 2024 16:13:52 -0700
Labels:           app=hello-node-connect
pod-template-hash=6f49f58cd5
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-6f49f58cd5
Containers:
echoserver-arm:
Container ID:   docker://fe2a64ec60da8c909ce4632f2a6ac07fa639348515a5accd8ef78cc103f4b89f
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Terminated
Reason:       Error
Exit Code:    1
Started:      Mon, 29 Jul 2024 16:14:11 -0700
Finished:     Mon, 29 Jul 2024 16:14:11 -0700
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Mon, 29 Jul 2024 16:13:58 -0700
Finished:     Mon, 29 Jul 2024 16:13:58 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ggn4c (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-ggn4c:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  31s                default-scheduler  Successfully assigned default/hello-node-connect-6f49f58cd5-x4s9d to functional-753000
Normal   Pulling    31s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
Normal   Pulled     26s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 3.05s (4.992s including waiting). Image size: 84957542 bytes.
Normal   Created    12s (x3 over 25s)  kubelet            Created container echoserver-arm
Normal   Pulled     12s (x2 over 25s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Normal   Started    11s (x3 over 25s)  kubelet            Started container echoserver-arm
Warning  BackOff    11s (x3 over 24s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-6f49f58cd5-x4s9d_default(38bee33c-b50e-47c8-80e1-f22b5a0ff484)

functional_test.go:1604: (dbg) Run:  kubectl --context functional-753000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
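
The "exec format error" in the pod logs means the container entrypoint is a binary built for a different CPU architecture than the arm64 node, so the container exits immediately and never becomes Ready. A quick way to confirm, sketched against the image named above (assumes the image is present on the local Docker daemon):

	# Print the OS/architecture recorded in the image metadata; on this
	# arm64 host anything other than linux/arm64 cannot exec.
	docker image inspect registry.k8s.io/echoserver-arm:1.8 --format '{{.Os}}/{{.Architecture}}'
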
functional_test.go:1610: (dbg) Run:  kubectl --context functional-753000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.101.169.114
IPs:                      10.101.169.114
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30939/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
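
The empty Endpoints: field above is the direct cause of the repeated connection-refused fetches: with no Ready pod behind hello-node-connect, the NodePort has nothing to forward to. A short sketch of the same check, reusing the test's kubectl context:

	# An empty ENDPOINTS column confirms no Ready pod backs the Service.
	kubectl --context functional-753000 get endpoints hello-node-connect
	# The crash-looping pod (see the BackOff events above) explains why
	# readiness never succeeds.
	kubectl --context functional-753000 get pods -l app=hello-node-connect
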
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-753000 -n functional-753000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                      Args                                                       |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image   | functional-753000 image load --daemon                                                                           | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:13 PDT | 29 Jul 24 16:13 PDT |
	|         | docker.io/kicbase/echo-server:functional-753000                                                                 |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-753000 image ls                                                                                      | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:13 PDT | 29 Jul 24 16:13 PDT |
	| image   | functional-753000 image load --daemon                                                                           | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:13 PDT | 29 Jul 24 16:13 PDT |
	|         | docker.io/kicbase/echo-server:functional-753000                                                                 |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-753000 image ls                                                                                      | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:13 PDT | 29 Jul 24 16:13 PDT |
	| image   | functional-753000 image load --daemon                                                                           | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:13 PDT | 29 Jul 24 16:13 PDT |
	|         | docker.io/kicbase/echo-server:functional-753000                                                                 |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-753000 image ls                                                                                      | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:13 PDT | 29 Jul 24 16:13 PDT |
	| image   | functional-753000 image save                                                                                    | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:13 PDT | 29 Jul 24 16:13 PDT |
	|         | docker.io/kicbase/echo-server:functional-753000                                                                 |                   |         |         |                     |                     |
	|         | /Users/jenkins/workspace/echo-server-save.tar                                                                   |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-753000 image rm                                                                                      | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:13 PDT | 29 Jul 24 16:13 PDT |
	|         | docker.io/kicbase/echo-server:functional-753000                                                                 |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-753000 image ls                                                                                      | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:13 PDT | 29 Jul 24 16:13 PDT |
	| image   | functional-753000 image load                                                                                    | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:13 PDT | 29 Jul 24 16:13 PDT |
	|         | /Users/jenkins/workspace/echo-server-save.tar                                                                   |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-753000 image ls                                                                                      | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:13 PDT | 29 Jul 24 16:13 PDT |
	| image   | functional-753000 image save --daemon                                                                           | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:13 PDT | 29 Jul 24 16:13 PDT |
	|         | docker.io/kicbase/echo-server:functional-753000                                                                 |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| addons  | functional-753000 addons list                                                                                   | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:13 PDT | 29 Jul 24 16:13 PDT |
	| addons  | functional-753000 addons list                                                                                   | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:13 PDT | 29 Jul 24 16:13 PDT |
	|         | -o json                                                                                                         |                   |         |         |                     |                     |
	| service | functional-753000 service                                                                                       | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:14 PDT | 29 Jul 24 16:14 PDT |
	|         | hello-node-connect --url                                                                                        |                   |         |         |                     |                     |
	| service | functional-753000 service list                                                                                  | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:14 PDT | 29 Jul 24 16:14 PDT |
	| service | functional-753000 service list                                                                                  | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:14 PDT | 29 Jul 24 16:14 PDT |
	|         | -o json                                                                                                         |                   |         |         |                     |                     |
	| service | functional-753000 service                                                                                       | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:14 PDT | 29 Jul 24 16:14 PDT |
	|         | --namespace=default --https                                                                                     |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                                |                   |         |         |                     |                     |
	| service | functional-753000                                                                                               | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:14 PDT | 29 Jul 24 16:14 PDT |
	|         | service hello-node --url                                                                                        |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                |                   |         |         |                     |                     |
	| service | functional-753000 service                                                                                       | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:14 PDT | 29 Jul 24 16:14 PDT |
	|         | hello-node --url                                                                                                |                   |         |         |                     |                     |
	| mount   | -p functional-753000                                                                                            | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:14 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3810492085/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-753000 ssh findmnt                                                                                   | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:14 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-753000 ssh findmnt                                                                                   | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:14 PDT | 29 Jul 24 16:14 PDT |
	|         | -T /mount-9p | grep 9p                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-753000 ssh -- ls                                                                                     | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:14 PDT | 29 Jul 24 16:14 PDT |
	|         | -la /mount-9p                                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-753000 ssh cat                                                                                       | functional-753000 | jenkins | v1.33.1 | 29 Jul 24 16:14 PDT | 29 Jul 24 16:14 PDT |
	|         | /mount-9p/test-1722294859006997000                                                                              |                   |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 16:12:57
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
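
For readers unfamiliar with this header, every line below follows the klog/glog convention spelled out above. As a quick illustration (this is not minikube code, just a hedged sketch), a self-contained Go program that splits such a line into its fields:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches the documented header: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ ]+:\d+)\] (.*)$`)

    func main() {
        line := `I0729 16:12:57.861328    2008 out.go:291] Setting OutFile to fd 1 ...`
        if m := klogLine.FindStringSubmatch(line); m != nil {
            fmt.Printf("severity=%s date=%s time=%s tid=%s src=%s msg=%q\n",
                m[1], m[2], m[3], m[4], m[5], m[6])
        }
    }
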
	I0729 16:12:57.861328    2008 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:12:57.861447    2008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:12:57.861449    2008 out.go:304] Setting ErrFile to fd 2...
	I0729 16:12:57.861452    2008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:12:57.861598    2008 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:12:57.862675    2008 out.go:298] Setting JSON to false
	I0729 16:12:57.879505    2008 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":740,"bootTime":1722294037,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:12:57.879567    2008 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:12:57.887798    2008 out.go:177] * [functional-753000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:12:57.893797    2008 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:12:57.893833    2008 notify.go:220] Checking for updates...
	I0729 16:12:57.900716    2008 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:12:57.903778    2008 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:12:57.906791    2008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:12:57.909694    2008 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:12:57.912752    2008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:12:57.916030    2008 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:12:57.916074    2008 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:12:57.919728    2008 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:12:57.926800    2008 start.go:297] selected driver: qemu2
	I0729 16:12:57.926804    2008 start.go:901] validating driver "qemu2" against &{Name:functional-753000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-753000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:12:57.926852    2008 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:12:57.929006    2008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:12:57.929037    2008 cni.go:84] Creating CNI manager for ""
	I0729 16:12:57.929043    2008 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:12:57.929085    2008 start.go:340] cluster config:
	{Name:functional-753000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-753000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:12:57.932270    2008 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:12:57.939783    2008 out.go:177] * Starting "functional-753000" primary control-plane node in "functional-753000" cluster
	I0729 16:12:57.943809    2008 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:12:57.943821    2008 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:12:57.943827    2008 cache.go:56] Caching tarball of preloaded images
	I0729 16:12:57.943882    2008 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:12:57.943886    2008 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:12:57.943930    2008 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/config.json ...
	I0729 16:12:57.944233    2008 start.go:360] acquireMachinesLock for functional-753000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:12:57.944263    2008 start.go:364] duration metric: took 25.459µs to acquireMachinesLock for "functional-753000"
	I0729 16:12:57.944270    2008 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:12:57.944274    2008 fix.go:54] fixHost starting: 
	I0729 16:12:57.944841    2008 fix.go:112] recreateIfNeeded on functional-753000: state=Running err=<nil>
	W0729 16:12:57.944847    2008 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:12:57.948780    2008 out.go:177] * Updating the running qemu2 "functional-753000" VM ...
	I0729 16:12:57.956742    2008 machine.go:94] provisionDockerMachine start ...
	I0729 16:12:57.956775    2008 main.go:141] libmachine: Using SSH client type: native
	I0729 16:12:57.956885    2008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e56a10] 0x102e59270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0729 16:12:57.956887    2008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 16:12:57.998550    2008 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753000
	
	I0729 16:12:57.998560    2008 buildroot.go:166] provisioning hostname "functional-753000"
	I0729 16:12:57.998602    2008 main.go:141] libmachine: Using SSH client type: native
	I0729 16:12:57.998708    2008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e56a10] 0x102e59270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0729 16:12:57.998711    2008 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-753000 && echo "functional-753000" | sudo tee /etc/hostname
	I0729 16:12:58.044063    2008 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753000
	
	I0729 16:12:58.044110    2008 main.go:141] libmachine: Using SSH client type: native
	I0729 16:12:58.044238    2008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e56a10] 0x102e59270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0729 16:12:58.044245    2008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 16:12:58.085901    2008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 16:12:58.085909    2008 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19347-923/.minikube CaCertPath:/Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19347-923/.minikube}
	I0729 16:12:58.085916    2008 buildroot.go:174] setting up certificates
	I0729 16:12:58.085919    2008 provision.go:84] configureAuth start
	I0729 16:12:58.085925    2008 provision.go:143] copyHostCerts
	I0729 16:12:58.086006    2008 exec_runner.go:144] found /Users/jenkins/minikube-integration/19347-923/.minikube/ca.pem, removing ...
	I0729 16:12:58.086010    2008 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19347-923/.minikube/ca.pem
	I0729 16:12:58.086253    2008 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19347-923/.minikube/ca.pem (1082 bytes)
	I0729 16:12:58.086434    2008 exec_runner.go:144] found /Users/jenkins/minikube-integration/19347-923/.minikube/cert.pem, removing ...
	I0729 16:12:58.086435    2008 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19347-923/.minikube/cert.pem
	I0729 16:12:58.086487    2008 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19347-923/.minikube/cert.pem (1123 bytes)
	I0729 16:12:58.086598    2008 exec_runner.go:144] found /Users/jenkins/minikube-integration/19347-923/.minikube/key.pem, removing ...
	I0729 16:12:58.086599    2008 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19347-923/.minikube/key.pem
	I0729 16:12:58.086644    2008 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19347-923/.minikube/key.pem (1679 bytes)
	I0729 16:12:58.086737    2008 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19347-923/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca-key.pem org=jenkins.functional-753000 san=[127.0.0.1 192.168.105.4 functional-753000 localhost minikube]
	I0729 16:12:58.197452    2008 provision.go:177] copyRemoteCerts
	I0729 16:12:58.197482    2008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 16:12:58.197487    2008 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/functional-753000/id_rsa Username:docker}
	I0729 16:12:58.221919    2008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 16:12:58.230806    2008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 16:12:58.239590    2008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 16:12:58.247901    2008 provision.go:87] duration metric: took 161.980792ms to configureAuth
	I0729 16:12:58.247908    2008 buildroot.go:189] setting minikube options for container-runtime
	I0729 16:12:58.248029    2008 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:12:58.248067    2008 main.go:141] libmachine: Using SSH client type: native
	I0729 16:12:58.248150    2008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e56a10] 0x102e59270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0729 16:12:58.248153    2008 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 16:12:58.289939    2008 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 16:12:58.289945    2008 buildroot.go:70] root file system type: tmpfs
	I0729 16:12:58.289993    2008 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 16:12:58.290052    2008 main.go:141] libmachine: Using SSH client type: native
	I0729 16:12:58.290174    2008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e56a10] 0x102e59270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0729 16:12:58.290205    2008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 16:12:58.334978    2008 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 16:12:58.335037    2008 main.go:141] libmachine: Using SSH client type: native
	I0729 16:12:58.335145    2008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e56a10] 0x102e59270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0729 16:12:58.335151    2008 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 16:12:58.376783    2008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 16:12:58.376790    2008 machine.go:97] duration metric: took 420.052292ms to provisionDockerMachine
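
The three steps above (render a candidate unit to docker.service.new, diff it against the installed file, and only on a difference move it into place and restart the service) form an update-only-if-changed pattern. A minimal Go sketch of that pattern follows; the function name and paths are invented for illustration, not minikube's actual implementation:

    package main

    import (
        "bytes"
        "log"
        "os"
    )

    // installIfChanged writes rendered unit content and replaces dst only when
    // the content differs, mirroring the diff-then-mv step in the log above.
    func installIfChanged(dst string, rendered []byte) (bool, error) {
        current, err := os.ReadFile(dst)
        if err == nil && bytes.Equal(current, rendered) {
            return false, nil // identical: nothing to do, no service restart
        }
        tmp := dst + ".new"
        if err := os.WriteFile(tmp, rendered, 0o644); err != nil {
            return false, err
        }
        return true, os.Rename(tmp, dst)
    }

    func main() {
        changed, err := installIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("changed=%v (a real caller would daemon-reload and restart)", changed)
    }
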
	I0729 16:12:58.376794    2008 start.go:293] postStartSetup for "functional-753000" (driver="qemu2")
	I0729 16:12:58.376799    2008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 16:12:58.376845    2008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 16:12:58.376851    2008 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/functional-753000/id_rsa Username:docker}
	I0729 16:12:58.399725    2008 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 16:12:58.401347    2008 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 16:12:58.401352    2008 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19347-923/.minikube/addons for local assets ...
	I0729 16:12:58.401449    2008 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19347-923/.minikube/files for local assets ...
	I0729 16:12:58.401568    2008 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19347-923/.minikube/files/etc/ssl/certs/13902.pem -> 13902.pem in /etc/ssl/certs
	I0729 16:12:58.401687    2008 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19347-923/.minikube/files/etc/test/nested/copy/1390/hosts -> hosts in /etc/test/nested/copy/1390
	I0729 16:12:58.401721    2008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1390
	I0729 16:12:58.405094    2008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/files/etc/ssl/certs/13902.pem --> /etc/ssl/certs/13902.pem (1708 bytes)
	I0729 16:12:58.413522    2008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/files/etc/test/nested/copy/1390/hosts --> /etc/test/nested/copy/1390/hosts (40 bytes)
	I0729 16:12:58.422230    2008 start.go:296] duration metric: took 45.431666ms for postStartSetup
	I0729 16:12:58.422240    2008 fix.go:56] duration metric: took 477.975125ms for fixHost
	I0729 16:12:58.422276    2008 main.go:141] libmachine: Using SSH client type: native
	I0729 16:12:58.422389    2008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e56a10] 0x102e59270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0729 16:12:58.422391    2008 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 16:12:58.464061    2008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722294778.496933539
	
	I0729 16:12:58.464065    2008 fix.go:216] guest clock: 1722294778.496933539
	I0729 16:12:58.464068    2008 fix.go:229] Guest: 2024-07-29 16:12:58.496933539 -0700 PDT Remote: 2024-07-29 16:12:58.422241 -0700 PDT m=+0.580698960 (delta=74.692539ms)
	I0729 16:12:58.464085    2008 fix.go:200] guest clock delta is within tolerance: 74.692539ms
	I0729 16:12:58.464087    2008 start.go:83] releasing machines lock for "functional-753000", held for 519.830792ms
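
The guest-clock check above works by running date +%s.%N on the VM and comparing the parsed timestamp against the host clock. A hedged Go sketch of that comparison (the tolerance value here is illustrative, not the one minikube uses):

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func main() {
        guestOut := "1722294778.496933539" // what `date +%s.%N` printed on the VM
        secs, _ := strconv.ParseFloat(guestOut, 64) // float parse is approximate; fine for a sketch
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := guest.Sub(time.Now())
        fmt.Printf("delta=%v within tolerance: %v\n", delta, delta.Abs() < 2*time.Second)
    }
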
	I0729 16:12:58.464409    2008 ssh_runner.go:195] Run: cat /version.json
	I0729 16:12:58.464414    2008 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/functional-753000/id_rsa Username:docker}
	I0729 16:12:58.464429    2008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 16:12:58.464446    2008 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/functional-753000/id_rsa Username:docker}
	I0729 16:12:58.528978    2008 ssh_runner.go:195] Run: systemctl --version
	I0729 16:12:58.531128    2008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 16:12:58.532921    2008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 16:12:58.532945    2008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 16:12:58.536141    2008 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 16:12:58.536147    2008 start.go:495] detecting cgroup driver to use...
	I0729 16:12:58.536213    2008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 16:12:58.542953    2008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0729 16:12:58.546619    2008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 16:12:58.550434    2008 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 16:12:58.550463    2008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 16:12:58.554203    2008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 16:12:58.558369    2008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 16:12:58.562425    2008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 16:12:58.566667    2008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 16:12:58.571165    2008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 16:12:58.575281    2008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 16:12:58.579312    2008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 16:12:58.583439    2008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 16:12:58.587348    2008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 16:12:58.590812    2008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:12:58.687226    2008 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0729 16:12:58.698346    2008 start.go:495] detecting cgroup driver to use...
	I0729 16:12:58.698408    2008 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 16:12:58.704827    2008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 16:12:58.714219    2008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 16:12:58.721337    2008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 16:12:58.726902    2008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 16:12:58.732003    2008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 16:12:58.738599    2008 ssh_runner.go:195] Run: which cri-dockerd
	I0729 16:12:58.740142    2008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 16:12:58.743725    2008 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 16:12:58.749493    2008 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 16:12:58.839087    2008 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 16:12:58.930819    2008 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 16:12:58.930869    2008 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 16:12:58.937484    2008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:12:59.028113    2008 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 16:13:11.345213    2008 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.317302875s)
	I0729 16:13:11.345284    2008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 16:13:11.351394    2008 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 16:13:11.361165    2008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 16:13:11.366459    2008 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 16:13:11.443147    2008 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 16:13:11.527405    2008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:13:11.591040    2008 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 16:13:11.597830    2008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 16:13:11.603294    2008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:13:11.677269    2008 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 16:13:11.707962    2008 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 16:13:11.708035    2008 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 16:13:11.710493    2008 start.go:563] Will wait 60s for crictl version
	I0729 16:13:11.710530    2008 ssh_runner.go:195] Run: which crictl
	I0729 16:13:11.711957    2008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 16:13:11.730006    2008 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.0
	RuntimeApiVersion:  v1
	I0729 16:13:11.730094    2008 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 16:13:11.737101    2008 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 16:13:11.753148    2008 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	I0729 16:13:11.753277    2008 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0729 16:13:11.758221    2008 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0729 16:13:11.762091    2008 kubeadm.go:883] updating cluster {Name:functional-753000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-753000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 16:13:11.762155    2008 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:13:11.762218    2008 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 16:13:11.768360    2008 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-753000
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0729 16:13:11.768364    2008 docker.go:615] Images already preloaded, skipping extraction
	I0729 16:13:11.768413    2008 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 16:13:11.773990    2008 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-753000
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0729 16:13:11.773995    2008 cache_images.go:84] Images are preloaded, skipping loading
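
The preload decision above reduces to set membership: every required image ref must appear in the `docker images --format {{.Repository}}:{{.Tag}}` output. A small illustrative Go sketch of that check (the helper name is invented, not minikube's):

    package main

    import (
        "fmt"
        "strings"
    )

    // missingImages returns the required refs that do not appear in the
    // newline-separated `docker images` output.
    func missingImages(dockerOut string, required []string) []string {
        have := map[string]bool{}
        for _, line := range strings.Split(strings.TrimSpace(dockerOut), "\n") {
            have[strings.TrimSpace(line)] = true
        }
        var missing []string
        for _, img := range required {
            if !have[img] {
                missing = append(missing, img)
            }
        }
        return missing
    }

    func main() {
        out := "registry.k8s.io/pause:3.9\nregistry.k8s.io/etcd:3.5.12-0\n"
        fmt.Println(missingImages(out, []string{
            "registry.k8s.io/pause:3.9",
            "registry.k8s.io/coredns/coredns:v1.11.1",
        }))
    }
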
	I0729 16:13:11.773999    2008 kubeadm.go:934] updating node { 192.168.105.4 8441 v1.30.3 docker true true} ...
	I0729 16:13:11.774054    2008 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-753000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:functional-753000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 16:13:11.774109    2008 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 16:13:11.790098    2008 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0729 16:13:11.790138    2008 cni.go:84] Creating CNI manager for ""
	I0729 16:13:11.790144    2008 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:13:11.790148    2008 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 16:13:11.790157    2008 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753000 NodeName:functional-753000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 16:13:11.790220    2008 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-753000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
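
Configs like the kubeadm document above are typically rendered from a template plus per-node parameters (advertise address 192.168.105.4, bind port 8441, and so on). As a hedged sketch of that rendering step, using a deliberately simplified template and struct rather than minikube's real ones:

    package main

    import (
        "os"
        "text/template"
    )

    // A toy fragment of the InitConfiguration shown above.
    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.APIServerPort}}
    `

    type params struct {
        NodeIP        string
        APIServerPort int
    }

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        _ = t.Execute(os.Stdout, params{NodeIP: "192.168.105.4", APIServerPort: 8441})
    }
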
	
	I0729 16:13:11.790276    2008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 16:13:11.794321    2008 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 16:13:11.794357    2008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 16:13:11.798027    2008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0729 16:13:11.803828    2008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 16:13:11.809763    2008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2012 bytes)
	I0729 16:13:11.815560    2008 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0729 16:13:11.816984    2008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:13:11.900029    2008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 16:13:11.905924    2008 certs.go:68] Setting up /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000 for IP: 192.168.105.4
	I0729 16:13:11.905927    2008 certs.go:194] generating shared ca certs ...
	I0729 16:13:11.905934    2008 certs.go:226] acquiring lock for ca certs: {Name:mk4279a132dfe000316d0221b0d97d4e537df506 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:13:11.906080    2008 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19347-923/.minikube/ca.key
	I0729 16:13:11.906140    2008 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19347-923/.minikube/proxy-client-ca.key
	I0729 16:13:11.906144    2008 certs.go:256] generating profile certs ...
	I0729 16:13:11.906206    2008 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.key
	I0729 16:13:11.906256    2008 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/apiserver.key.7b1be317
	I0729 16:13:11.906301    2008 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/proxy-client.key
	I0729 16:13:11.906448    2008 certs.go:484] found cert: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/1390.pem (1338 bytes)
	W0729 16:13:11.906474    2008 certs.go:480] ignoring /Users/jenkins/minikube-integration/19347-923/.minikube/certs/1390_empty.pem, impossibly tiny 0 bytes
	I0729 16:13:11.906479    2008 certs.go:484] found cert: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 16:13:11.906496    2008 certs.go:484] found cert: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem (1082 bytes)
	I0729 16:13:11.906518    2008 certs.go:484] found cert: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem (1123 bytes)
	I0729 16:13:11.906534    2008 certs.go:484] found cert: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/key.pem (1679 bytes)
	I0729 16:13:11.906571    2008 certs.go:484] found cert: /Users/jenkins/minikube-integration/19347-923/.minikube/files/etc/ssl/certs/13902.pem (1708 bytes)
	I0729 16:13:11.906898    2008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 16:13:11.915535    2008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 16:13:11.923933    2008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 16:13:11.931714    2008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 16:13:11.939514    2008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 16:13:11.947506    2008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 16:13:11.955476    2008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 16:13:11.963669    2008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 16:13:11.971814    2008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 16:13:11.979936    2008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/certs/1390.pem --> /usr/share/ca-certificates/1390.pem (1338 bytes)
	I0729 16:13:11.988413    2008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/files/etc/ssl/certs/13902.pem --> /usr/share/ca-certificates/13902.pem (1708 bytes)
	I0729 16:13:11.996618    2008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 16:13:12.002517    2008 ssh_runner.go:195] Run: openssl version
	I0729 16:13:12.004560    2008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 16:13:12.008215    2008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:13:12.009631    2008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 23:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:13:12.009649    2008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:13:12.011743    2008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 16:13:12.015132    2008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1390.pem && ln -fs /usr/share/ca-certificates/1390.pem /etc/ssl/certs/1390.pem"
	I0729 16:13:12.018886    2008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1390.pem
	I0729 16:13:12.020306    2008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 23:11 /usr/share/ca-certificates/1390.pem
	I0729 16:13:12.020324    2008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1390.pem
	I0729 16:13:12.022207    2008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1390.pem /etc/ssl/certs/51391683.0"
	I0729 16:13:12.025850    2008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13902.pem && ln -fs /usr/share/ca-certificates/13902.pem /etc/ssl/certs/13902.pem"
	I0729 16:13:12.029670    2008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13902.pem
	I0729 16:13:12.031399    2008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 23:11 /usr/share/ca-certificates/13902.pem
	I0729 16:13:12.031414    2008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13902.pem
	I0729 16:13:12.033525    2008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13902.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 16:13:12.036812    2008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 16:13:12.038287    2008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 16:13:12.040265    2008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 16:13:12.042268    2008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 16:13:12.045568    2008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 16:13:12.047640    2008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 16:13:12.049638    2008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
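
Each of the openssl x509 ... -checkend 86400 runs above asks one question: does this certificate expire within the next 24 hours? The same check can be expressed in pure Go; this is a self-contained sketch, and the path in main is a placeholder:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within duration d, mirroring `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }
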
	I0729 16:13:12.051638    2008 kubeadm.go:392] StartCluster: {Name:functional-753000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-753000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:13:12.051709    2008 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 16:13:12.058754    2008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 16:13:12.062682    2008 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 16:13:12.062684    2008 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 16:13:12.062710    2008 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 16:13:12.066315    2008 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 16:13:12.066596    2008 kubeconfig.go:125] found "functional-753000" server: "https://192.168.105.4:8441"
	I0729 16:13:12.067197    2008 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 16:13:12.070751    2008 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
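The diff above is how minikube decides between a plain restart and a reconfigure: it renders the desired config to kubeadm.yaml.new and compares it with the live kubeadm.yaml; any difference (here, the enable-admission-plugins change requested via ExtraOptions) forces the control plane to be rebuilt from the new file. A sketch of that comparison, under the simplifying assumption that a byte-for-byte check is enough:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// configDrifted reports whether the rendered kubeadm config differs from
// the live one. Paths match the log; the helper itself is illustrative.
func configDrifted(live, rendered string) (bool, error) {
	a, err := os.ReadFile(live)
	if err != nil {
		return false, err
	}
	b, err := os.ReadFile(rendered)
	if err != nil {
		return false, err
	}
	return !bytes.Equal(a, b), nil
}

func main() {
	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("drift detected:", drifted)
}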
	I0729 16:13:12.070754    2008 kubeadm.go:1160] stopping kube-system containers ...
	I0729 16:13:12.070795    2008 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 16:13:12.078286    2008 docker.go:483] Stopping containers: [e6877e10af10 d7d6a60fef80 a3bd1ff5e5ff 5c0ee321fcb9 fde6ace2974e d5ad4ec0c2ea 56da5c9c1469 6df894180cc9 48f8cd99718b 5ce2150b8c84 317e5b72f38c 42b5c6585f2b 79776bc5de1d 6213636804f4 4988032c49da b9183d5d4f47 789e74422d15 7a6ab00b1fe0 54add683f3b4 25d85d64ca4e 305b1770850e 3fd3dd4682f6 65f9e2c46907 9b48040ac73f f2f1a39d60b6 0e97216155de 7654363d9ab9 3a8e6268b9bc 79f1b4ecd224 ab4c73791001]
	I0729 16:13:12.078349    2008 ssh_runner.go:195] Run: docker stop e6877e10af10 d7d6a60fef80 a3bd1ff5e5ff 5c0ee321fcb9 fde6ace2974e d5ad4ec0c2ea 56da5c9c1469 6df894180cc9 48f8cd99718b 5ce2150b8c84 317e5b72f38c 42b5c6585f2b 79776bc5de1d 6213636804f4 4988032c49da b9183d5d4f47 789e74422d15 7a6ab00b1fe0 54add683f3b4 25d85d64ca4e 305b1770850e 3fd3dd4682f6 65f9e2c46907 9b48040ac73f f2f1a39d60b6 0e97216155de 7654363d9ab9 3a8e6268b9bc 79f1b4ecd224 ab4c73791001
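Before rebuilding the control plane, minikube stops every kube-system pod container. It finds them through Docker's name filter, relying on kubelet's k8s_<container>_<pod>_<namespace>_... naming convention, then issues a single docker stop for the whole batch. A sketch of those two invocations wrapped in os/exec (illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// List all kube-system pod containers by name pattern, exactly as in the log.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return // nothing to stop
	}
	// Stop them all in one invocation.
	stop := exec.Command("docker", append([]string{"stop"}, ids...)...)
	stop.Stdout, stop.Stderr = os.Stdout, os.Stderr
	if err := stop.Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}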
	I0729 16:13:12.086024    2008 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 16:13:12.179339    2008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 16:13:12.184551    2008 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Jul 29 23:11 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Jul 29 23:12 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jul 29 23:11 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jul 29 23:12 /etc/kubernetes/scheduler.conf
	
	I0729 16:13:12.184590    2008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0729 16:13:12.189038    2008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0729 16:13:12.193254    2008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0729 16:13:12.197558    2008 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 16:13:12.197589    2008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 16:13:12.201661    2008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0729 16:13:12.205386    2008 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 16:13:12.205406    2008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 16:13:12.208871    2008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 16:13:12.212207    2008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:13:12.231056    2008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:13:12.733458    2008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:13:12.838129    2008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:13:12.870215    2008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
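The restart path then replays individual kubeadm init phases rather than a full kubeadm init: certs, kubeconfig, kubelet-start, control-plane, and local etcd, all against the same rendered config. A sketch of that sequence (the PATH override and sudo wrapper from the log are omitted; error handling is simplified):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Phase order mirrors the five kubeadm invocations in the log above.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}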
	I0729 16:13:12.902573    2008 api_server.go:52] waiting for apiserver process to appear ...
	I0729 16:13:12.902621    2008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:13:13.404702    2008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:13:13.904722    2008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:13:13.910162    2008 api_server.go:72] duration metric: took 1.007606458s to wait for apiserver process to appear ...
	I0729 16:13:13.910172    2008 api_server.go:88] waiting for apiserver healthz status ...
	I0729 16:13:13.910183    2008 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0729 16:13:15.519844    2008 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 16:13:15.519859    2008 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0729 16:13:15.559375    2008 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 16:13:15.912190    2008 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0729 16:13:15.914634    2008 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 16:13:16.412212    2008 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0729 16:13:16.414878    2008 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 16:13:16.912217    2008 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0729 16:13:16.915351    2008 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0729 16:13:16.921737    2008 api_server.go:141] control plane version: v1.30.3
	I0729 16:13:16.921746    2008 api_server.go:131] duration metric: took 3.011624333s to wait for apiserver health ...
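The polling above reads as a small state machine: the first probe gets 403 because the anonymous user cannot reach /healthz until the RBAC bootstrap roles exist, the next probes get 500 while poststarthooks are still completing, and the loop exits only on a plain 200 "ok". A hedged Go sketch of such a loop (TLS verification is skipped purely to keep it short; minikube authenticates against the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthz("https://192.168.105.4:8441/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}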
	I0729 16:13:16.921751    2008 cni.go:84] Creating CNI manager for ""
	I0729 16:13:16.921756    2008 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:13:16.927021    2008 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 16:13:16.930872    2008 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 16:13:16.934980    2008 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
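Because the qemu2 driver with the docker runtime brings no driver-managed network plugin, minikube falls back to the built-in bridge CNI and copies a conflist into /etc/cni/net.d. The 496-byte payload itself is not shown in the log; the conflist below is an illustrative guess at a minimal bridge configuration, not minikube's actual template:

package main

import "os"

// A minimal bridge CNI conflist in the spirit of the one the log scp's to
// /etc/cni/net.d/1-k8s.conflist. The contents are an assumption for the sketch.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}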
	I0729 16:13:16.940528    2008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 16:13:16.946022    2008 system_pods.go:59] 7 kube-system pods found
	I0729 16:13:16.946033    2008 system_pods.go:61] "coredns-7db6d8ff4d-hvthm" [1951777c-6d07-4a5d-bbd5-fbf50a631100] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 16:13:16.946036    2008 system_pods.go:61] "etcd-functional-753000" [f1194b7b-4788-4b36-beac-c0d1a7fdc5b3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 16:13:16.946043    2008 system_pods.go:61] "kube-apiserver-functional-753000" [eb713111-1dc7-4347-8b5d-858d157a12c3] Pending
	I0729 16:13:16.946046    2008 system_pods.go:61] "kube-controller-manager-functional-753000" [917653fe-db7a-4119-9e8f-8ef9645e0d4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 16:13:16.946048    2008 system_pods.go:61] "kube-proxy-qmxwr" [4b581958-820e-4218-8175-b089c192d161] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 16:13:16.946050    2008 system_pods.go:61] "kube-scheduler-functional-753000" [52a1bf17-0779-41d9-aff6-794629477c8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 16:13:16.946052    2008 system_pods.go:61] "storage-provisioner" [914a5f06-d2fb-4702-8bb3-ec79da5263eb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 16:13:16.946054    2008 system_pods.go:74] duration metric: took 5.523417ms to wait for pod list to return data ...
	I0729 16:13:16.946057    2008 node_conditions.go:102] verifying NodePressure condition ...
	I0729 16:13:16.947635    2008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 16:13:16.947640    2008 node_conditions.go:123] node cpu capacity is 2
	I0729 16:13:16.947644    2008 node_conditions.go:105] duration metric: took 1.585333ms to run NodePressure ...
	I0729 16:13:16.947650    2008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:13:17.169256    2008 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 16:13:17.171460    2008 kubeadm.go:739] kubelet initialised
	I0729 16:13:17.171465    2008 kubeadm.go:740] duration metric: took 2.201666ms waiting for restarted kubelet to initialise ...
	I0729 16:13:17.171468    2008 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 16:13:17.173892    2008 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-hvthm" in "kube-system" namespace to be "Ready" ...
	I0729 16:13:19.179180    2008 pod_ready.go:102] pod "coredns-7db6d8ff4d-hvthm" in "kube-system" namespace has status "Ready":"False"
	I0729 16:13:21.179174    2008 pod_ready.go:102] pod "coredns-7db6d8ff4d-hvthm" in "kube-system" namespace has status "Ready":"False"
	I0729 16:13:21.678874    2008 pod_ready.go:92] pod "coredns-7db6d8ff4d-hvthm" in "kube-system" namespace has status "Ready":"True"
	I0729 16:13:21.678882    2008 pod_ready.go:81] duration metric: took 4.505063333s for pod "coredns-7db6d8ff4d-hvthm" in "kube-system" namespace to be "Ready" ...
	I0729 16:13:21.678888    2008 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0729 16:13:23.683828    2008 pod_ready.go:102] pod "etcd-functional-753000" in "kube-system" namespace has status "Ready":"False"
	I0729 16:13:26.183273    2008 pod_ready.go:102] pod "etcd-functional-753000" in "kube-system" namespace has status "Ready":"False"
	I0729 16:13:28.185548    2008 pod_ready.go:102] pod "etcd-functional-753000" in "kube-system" namespace has status "Ready":"False"
	I0729 16:13:30.683525    2008 pod_ready.go:102] pod "etcd-functional-753000" in "kube-system" namespace has status "Ready":"False"
	I0729 16:13:31.183352    2008 pod_ready.go:92] pod "etcd-functional-753000" in "kube-system" namespace has status "Ready":"True"
	I0729 16:13:31.183359    2008 pod_ready.go:81] duration metric: took 9.504634583s for pod "etcd-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0729 16:13:31.183362    2008 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0729 16:13:31.185244    2008 pod_ready.go:92] pod "kube-apiserver-functional-753000" in "kube-system" namespace has status "Ready":"True"
	I0729 16:13:31.185247    2008 pod_ready.go:81] duration metric: took 1.882ms for pod "kube-apiserver-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0729 16:13:31.185250    2008 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0729 16:13:31.187235    2008 pod_ready.go:92] pod "kube-controller-manager-functional-753000" in "kube-system" namespace has status "Ready":"True"
	I0729 16:13:31.187238    2008 pod_ready.go:81] duration metric: took 1.9855ms for pod "kube-controller-manager-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0729 16:13:31.187242    2008 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qmxwr" in "kube-system" namespace to be "Ready" ...
	I0729 16:13:31.189166    2008 pod_ready.go:92] pod "kube-proxy-qmxwr" in "kube-system" namespace has status "Ready":"True"
	I0729 16:13:31.189169    2008 pod_ready.go:81] duration metric: took 1.924833ms for pod "kube-proxy-qmxwr" in "kube-system" namespace to be "Ready" ...
	I0729 16:13:31.189172    2008 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0729 16:13:31.191107    2008 pod_ready.go:92] pod "kube-scheduler-functional-753000" in "kube-system" namespace has status "Ready":"True"
	I0729 16:13:31.191109    2008 pod_ready.go:81] duration metric: took 1.935ms for pod "kube-scheduler-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0729 16:13:31.191112    2008 pod_ready.go:38] duration metric: took 14.01988625s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
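Each pod_ready line corresponds to a poll of the pod's PodReady condition through the API server. A client-go sketch of the same wait, with the pod name taken from this log but the kubeconfig path and loop hypothetical:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady mirrors the check behind the pod_ready lines: a pod counts
// as Ready when its PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget as the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-functional-753000", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod")
}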
	I0729 16:13:31.191120    2008 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 16:13:31.195240    2008 ops.go:34] apiserver oom_adj: -16
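The oom_adj probe confirms the kube-apiserver process is shielded from the kernel OOM killer (-16 means it is among the last candidates to be killed). Reading the value is a plain procfs read, e.g.:

package main

import (
	"fmt"
	"os"
	"strings"
)

// readOOMAdj reads /proc/<pid>/oom_adj, the value the log reports as -16
// for kube-apiserver.
func readOOMAdj(pid int) (string, error) {
	b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	v, err := readOOMAdj(1) // pid 1 used only for illustration
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("oom_adj:", v)
}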
	I0729 16:13:31.195243    2008 kubeadm.go:597] duration metric: took 19.132891375s to restartPrimaryControlPlane
	I0729 16:13:31.195246    2008 kubeadm.go:394] duration metric: took 19.143944791s to StartCluster
	I0729 16:13:31.195253    2008 settings.go:142] acquiring lock: {Name:mk3b097bc26d2850dd7467a616788f5486d088c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:13:31.195341    2008 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:13:31.195634    2008 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/kubeconfig: {Name:mkd561657b833051fbf9227370398307b87f9720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:13:31.195845    2008 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:13:31.195890    2008 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 16:13:31.195920    2008 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:13:31.195927    2008 addons.go:69] Setting storage-provisioner=true in profile "functional-753000"
	I0729 16:13:31.195937    2008 addons.go:234] Setting addon storage-provisioner=true in "functional-753000"
	W0729 16:13:31.195940    2008 addons.go:243] addon storage-provisioner should already be in state true
	I0729 16:13:31.195950    2008 host.go:66] Checking if "functional-753000" exists ...
	I0729 16:13:31.195957    2008 addons.go:69] Setting default-storageclass=true in profile "functional-753000"
	I0729 16:13:31.195972    2008 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-753000"
	I0729 16:13:31.196199    2008 retry.go:31] will retry after 914.59124ms: connect: dial unix /Users/jenkins/minikube-integration/19347-923/.minikube/machines/functional-753000/monitor: connect: connection refused
	I0729 16:13:31.196886    2008 addons.go:234] Setting addon default-storageclass=true in "functional-753000"
	W0729 16:13:31.196889    2008 addons.go:243] addon default-storageclass should already be in state true
	I0729 16:13:31.196895    2008 host.go:66] Checking if "functional-753000" exists ...
	I0729 16:13:31.197471    2008 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 16:13:31.197474    2008 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 16:13:31.197478    2008 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/functional-753000/id_rsa Username:docker}
	I0729 16:13:31.199910    2008 out.go:177] * Verifying Kubernetes components...
	I0729 16:13:31.206832    2008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:13:31.297463    2008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 16:13:31.303568    2008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 16:13:31.307169    2008 node_ready.go:35] waiting up to 6m0s for node "functional-753000" to be "Ready" ...
	I0729 16:13:31.383924    2008 node_ready.go:49] node "functional-753000" has status "Ready":"True"
	I0729 16:13:31.383930    2008 node_ready.go:38] duration metric: took 76.752792ms for node "functional-753000" to be "Ready" ...
	I0729 16:13:31.383933    2008 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 16:13:31.587498    2008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hvthm" in "kube-system" namespace to be "Ready" ...
	I0729 16:13:31.983921    2008 pod_ready.go:92] pod "coredns-7db6d8ff4d-hvthm" in "kube-system" namespace has status "Ready":"True"
	I0729 16:13:31.983926    2008 pod_ready.go:81] duration metric: took 396.428375ms for pod "coredns-7db6d8ff4d-hvthm" in "kube-system" namespace to be "Ready" ...
	I0729 16:13:31.983930    2008 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0729 16:13:32.117931    2008 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:13:32.121939    2008 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 16:13:32.121944    2008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 16:13:32.121960    2008 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/functional-753000/id_rsa Username:docker}
	I0729 16:13:32.150831    2008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 16:13:32.383372    2008 pod_ready.go:92] pod "etcd-functional-753000" in "kube-system" namespace has status "Ready":"True"
	I0729 16:13:32.383380    2008 pod_ready.go:81] duration metric: took 399.454375ms for pod "etcd-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0729 16:13:32.383383    2008 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0729 16:13:32.424117    2008 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0729 16:13:32.431897    2008 addons.go:510] duration metric: took 1.236055458s for enable addons: enabled=[default-storageclass storage-provisioner]
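Addon enablement is nothing more exotic than kubectl apply against manifests minikube has already copied onto the node, run with the in-VM kubeconfig. A sketch of that step (paths copied from the log lines above; the wrapper itself is hypothetical):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The two addon manifests applied in this run.
	manifests := []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	}
	for _, m := range manifests {
		cmd := exec.Command("/var/lib/minikube/binaries/v1.30.3/kubectl", "apply", "-f", m)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "apply %s: %v\n%s", m, err, out)
			os.Exit(1)
		}
	}
}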
	I0729 16:13:32.784220    2008 pod_ready.go:92] pod "kube-apiserver-functional-753000" in "kube-system" namespace has status "Ready":"True"
	I0729 16:13:32.784228    2008 pod_ready.go:81] duration metric: took 400.849375ms for pod "kube-apiserver-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0729 16:13:32.784234    2008 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0729 16:13:33.184122    2008 pod_ready.go:92] pod "kube-controller-manager-functional-753000" in "kube-system" namespace has status "Ready":"True"
	I0729 16:13:33.184128    2008 pod_ready.go:81] duration metric: took 399.897833ms for pod "kube-controller-manager-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0729 16:13:33.184132    2008 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qmxwr" in "kube-system" namespace to be "Ready" ...
	I0729 16:13:33.584231    2008 pod_ready.go:92] pod "kube-proxy-qmxwr" in "kube-system" namespace has status "Ready":"True"
	I0729 16:13:33.584236    2008 pod_ready.go:81] duration metric: took 400.108125ms for pod "kube-proxy-qmxwr" in "kube-system" namespace to be "Ready" ...
	I0729 16:13:33.584241    2008 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0729 16:13:33.984686    2008 pod_ready.go:92] pod "kube-scheduler-functional-753000" in "kube-system" namespace has status "Ready":"True"
	I0729 16:13:33.984694    2008 pod_ready.go:81] duration metric: took 400.456917ms for pod "kube-scheduler-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0729 16:13:33.984698    2008 pod_ready.go:38] duration metric: took 2.600805875s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 16:13:33.984708    2008 api_server.go:52] waiting for apiserver process to appear ...
	I0729 16:13:33.984774    2008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:13:33.990575    2008 api_server.go:72] duration metric: took 2.7947705s to wait for apiserver process to appear ...
	I0729 16:13:33.990579    2008 api_server.go:88] waiting for apiserver healthz status ...
	I0729 16:13:33.990585    2008 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0729 16:13:33.993559    2008 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0729 16:13:33.994032    2008 api_server.go:141] control plane version: v1.30.3
	I0729 16:13:33.994036    2008 api_server.go:131] duration metric: took 3.454667ms to wait for apiserver health ...
	I0729 16:13:33.994038    2008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 16:13:34.185893    2008 system_pods.go:59] 7 kube-system pods found
	I0729 16:13:34.185901    2008 system_pods.go:61] "coredns-7db6d8ff4d-hvthm" [1951777c-6d07-4a5d-bbd5-fbf50a631100] Running
	I0729 16:13:34.185903    2008 system_pods.go:61] "etcd-functional-753000" [f1194b7b-4788-4b36-beac-c0d1a7fdc5b3] Running
	I0729 16:13:34.185904    2008 system_pods.go:61] "kube-apiserver-functional-753000" [eb713111-1dc7-4347-8b5d-858d157a12c3] Running
	I0729 16:13:34.185905    2008 system_pods.go:61] "kube-controller-manager-functional-753000" [917653fe-db7a-4119-9e8f-8ef9645e0d4e] Running
	I0729 16:13:34.185906    2008 system_pods.go:61] "kube-proxy-qmxwr" [4b581958-820e-4218-8175-b089c192d161] Running
	I0729 16:13:34.185907    2008 system_pods.go:61] "kube-scheduler-functional-753000" [52a1bf17-0779-41d9-aff6-794629477c8a] Running
	I0729 16:13:34.185908    2008 system_pods.go:61] "storage-provisioner" [914a5f06-d2fb-4702-8bb3-ec79da5263eb] Running
	I0729 16:13:34.185911    2008 system_pods.go:74] duration metric: took 191.874291ms to wait for pod list to return data ...
	I0729 16:13:34.185913    2008 default_sa.go:34] waiting for default service account to be created ...
	I0729 16:13:34.382759    2008 default_sa.go:45] found service account: "default"
	I0729 16:13:34.382762    2008 default_sa.go:55] duration metric: took 196.850459ms for default service account to be created ...
	I0729 16:13:34.382765    2008 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 16:13:34.585708    2008 system_pods.go:86] 7 kube-system pods found
	I0729 16:13:34.585715    2008 system_pods.go:89] "coredns-7db6d8ff4d-hvthm" [1951777c-6d07-4a5d-bbd5-fbf50a631100] Running
	I0729 16:13:34.585717    2008 system_pods.go:89] "etcd-functional-753000" [f1194b7b-4788-4b36-beac-c0d1a7fdc5b3] Running
	I0729 16:13:34.585718    2008 system_pods.go:89] "kube-apiserver-functional-753000" [eb713111-1dc7-4347-8b5d-858d157a12c3] Running
	I0729 16:13:34.585720    2008 system_pods.go:89] "kube-controller-manager-functional-753000" [917653fe-db7a-4119-9e8f-8ef9645e0d4e] Running
	I0729 16:13:34.585722    2008 system_pods.go:89] "kube-proxy-qmxwr" [4b581958-820e-4218-8175-b089c192d161] Running
	I0729 16:13:34.585723    2008 system_pods.go:89] "kube-scheduler-functional-753000" [52a1bf17-0779-41d9-aff6-794629477c8a] Running
	I0729 16:13:34.585724    2008 system_pods.go:89] "storage-provisioner" [914a5f06-d2fb-4702-8bb3-ec79da5263eb] Running
	I0729 16:13:34.585726    2008 system_pods.go:126] duration metric: took 202.963333ms to wait for k8s-apps to be running ...
	I0729 16:13:34.585728    2008 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 16:13:34.585801    2008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 16:13:34.592464    2008 system_svc.go:56] duration metric: took 6.733ms WaitForService to wait for kubelet
	I0729 16:13:34.592472    2008 kubeadm.go:582] duration metric: took 3.396677792s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:13:34.592481    2008 node_conditions.go:102] verifying NodePressure condition ...
	I0729 16:13:34.784665    2008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 16:13:34.784670    2008 node_conditions.go:123] node cpu capacity is 2
	I0729 16:13:34.784675    2008 node_conditions.go:105] duration metric: took 192.196041ms to run NodePressure ...
	I0729 16:13:34.784680    2008 start.go:241] waiting for startup goroutines ...
	I0729 16:13:34.784684    2008 start.go:246] waiting for cluster config update ...
	I0729 16:13:34.784689    2008 start.go:255] writing updated cluster config ...
	I0729 16:13:34.785063    2008 ssh_runner.go:195] Run: rm -f paused
	I0729 16:13:34.814642    2008 start.go:600] kubectl: 1.29.2, cluster: 1.30.3 (minor skew: 1)
	I0729 16:13:34.818115    2008 out.go:177] * Done! kubectl is now configured to use "functional-753000" cluster and "default" namespace by default
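The kubectl line above computes the skew between the host kubectl (1.29.2) and the cluster (1.30.3); a minor skew of 1 is within kubectl's supported range, so it is reported informationally rather than as a warning. A small sketch of that computation:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component of a "major.minor.patch" version string.
func minor(v string) (int, error) {
	parts := strings.Split(v, ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("bad version %q", v)
	}
	return strconv.Atoi(parts[1])
}

// minorSkew returns the absolute difference between two minor versions,
// as in the "minor skew: 1" log line.
func minorSkew(a, b string) (int, error) {
	ma, err := minor(a)
	if err != nil {
		return 0, err
	}
	mb, err := minor(b)
	if err != nil {
		return 0, err
	}
	if ma > mb {
		return ma - mb, nil
	}
	return mb - ma, nil
}

func main() {
	skew, _ := minorSkew("1.29.2", "1.30.3")
	fmt.Println("minor skew:", skew) // prints 1
}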
	
	
	==> Docker <==
	Jul 29 23:14:12 functional-753000 dockerd[6089]: time="2024-07-29T23:14:12.375997815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 23:14:12 functional-753000 dockerd[6089]: time="2024-07-29T23:14:12.376416313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 23:14:12 functional-753000 dockerd[6089]: time="2024-07-29T23:14:12.402693418Z" level=info msg="shim disconnected" id=a04666938a406199a6a75f3f4c9d93839b97d775d5d2b8d0b4e745ff31e04020 namespace=moby
	Jul 29 23:14:12 functional-753000 dockerd[6089]: time="2024-07-29T23:14:12.402722448Z" level=warning msg="cleaning up after shim disconnected" id=a04666938a406199a6a75f3f4c9d93839b97d775d5d2b8d0b4e745ff31e04020 namespace=moby
	Jul 29 23:14:12 functional-753000 dockerd[6089]: time="2024-07-29T23:14:12.402726780Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 23:14:12 functional-753000 dockerd[6083]: time="2024-07-29T23:14:12.402932447Z" level=info msg="ignoring event" container=a04666938a406199a6a75f3f4c9d93839b97d775d5d2b8d0b4e745ff31e04020 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 23:14:20 functional-753000 dockerd[6089]: time="2024-07-29T23:14:20.066965679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 23:14:20 functional-753000 dockerd[6089]: time="2024-07-29T23:14:20.067007621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 23:14:20 functional-753000 dockerd[6089]: time="2024-07-29T23:14:20.067021573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 23:14:20 functional-753000 dockerd[6089]: time="2024-07-29T23:14:20.067054143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 23:14:20 functional-753000 cri-dockerd[6351]: time="2024-07-29T23:14:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/66ab0ed1a647f5ac8ba8ba49fce35a10266344ae0e42b02e21c749979dc1598d/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 29 23:14:21 functional-753000 cri-dockerd[6351]: time="2024-07-29T23:14:21Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Jul 29 23:14:21 functional-753000 dockerd[6089]: time="2024-07-29T23:14:21.452665464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 23:14:21 functional-753000 dockerd[6089]: time="2024-07-29T23:14:21.452700659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 23:14:21 functional-753000 dockerd[6089]: time="2024-07-29T23:14:21.452709947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 23:14:21 functional-753000 dockerd[6089]: time="2024-07-29T23:14:21.452745141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 23:14:21 functional-753000 dockerd[6083]: time="2024-07-29T23:14:21.486406637Z" level=info msg="ignoring event" container=5e41fd095e26b9e33634ab1689bce2a635f5f4877cee7da11da5926419a062f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 23:14:21 functional-753000 dockerd[6089]: time="2024-07-29T23:14:21.486615805Z" level=info msg="shim disconnected" id=5e41fd095e26b9e33634ab1689bce2a635f5f4877cee7da11da5926419a062f8 namespace=moby
	Jul 29 23:14:21 functional-753000 dockerd[6089]: time="2024-07-29T23:14:21.486649958Z" level=warning msg="cleaning up after shim disconnected" id=5e41fd095e26b9e33634ab1689bce2a635f5f4877cee7da11da5926419a062f8 namespace=moby
	Jul 29 23:14:21 functional-753000 dockerd[6089]: time="2024-07-29T23:14:21.486654623Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 23:14:23 functional-753000 dockerd[6089]: time="2024-07-29T23:14:23.466187455Z" level=info msg="shim disconnected" id=66ab0ed1a647f5ac8ba8ba49fce35a10266344ae0e42b02e21c749979dc1598d namespace=moby
	Jul 29 23:14:23 functional-753000 dockerd[6089]: time="2024-07-29T23:14:23.466217401Z" level=warning msg="cleaning up after shim disconnected" id=66ab0ed1a647f5ac8ba8ba49fce35a10266344ae0e42b02e21c749979dc1598d namespace=moby
	Jul 29 23:14:23 functional-753000 dockerd[6089]: time="2024-07-29T23:14:23.466223190Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 23:14:23 functional-753000 dockerd[6083]: time="2024-07-29T23:14:23.466361261Z" level=info msg="ignoring event" container=66ab0ed1a647f5ac8ba8ba49fce35a10266344ae0e42b02e21c749979dc1598d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 23:14:23 functional-753000 dockerd[6089]: time="2024-07-29T23:14:23.470129776Z" level=warning msg="cleanup warnings time=\"2024-07-29T23:14:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5e41fd095e26b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   2 seconds ago        Exited              mount-munger              0                   66ab0ed1a647f       busybox-mount
	a04666938a406       72565bf5bbedf                                                                                         11 seconds ago       Exited              echoserver-arm            1                   1888abd440e45       hello-node-65f5d5cc78-qwt2w
	fe2a64ec60da8       72565bf5bbedf                                                                                         12 seconds ago       Exited              echoserver-arm            2                   e1160dca30f58       hello-node-connect-6f49f58cd5-x4s9d
	04d5fb43f8818       nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c                         19 seconds ago       Running             myfrontend                0                   2f29ad08281c9       sp-pod
	d5e455e604e02       nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         38 seconds ago       Running             nginx                     0                   00c951de53946       nginx-svc
	44dad924a0324       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   c1b2ef8e292e6       coredns-7db6d8ff4d-hvthm
	26aace26f0e41       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   55d4707122c81       storage-provisioner
	86c2c179b2d05       2351f570ed0ea                                                                                         About a minute ago   Running             kube-proxy                2                   a25f19583b96d       kube-proxy-qmxwr
	f4a9c4ababd8e       d48f992a22722                                                                                         About a minute ago   Running             kube-scheduler            2                   2bb2cbb5fac17       kube-scheduler-functional-753000
	bf48719bc85b1       8e97cdb19e7cc                                                                                         About a minute ago   Running             kube-controller-manager   2                   07e5a2cd6297b       kube-controller-manager-functional-753000
	ed84fe0b3c80f       014faa467e297                                                                                         About a minute ago   Running             etcd                      2                   83def46bbb1a7       etcd-functional-753000
	ea653298043d9       61773190d42ff                                                                                         About a minute ago   Running             kube-apiserver            0                   1b2692ced003c       kube-apiserver-functional-753000
	e6877e10af10e       2437cf7621777                                                                                         About a minute ago   Exited              coredns                   1                   5c0ee321fcb98       coredns-7db6d8ff4d-hvthm
	d7d6a60fef80f       2351f570ed0ea                                                                                         About a minute ago   Exited              kube-proxy                1                   d5ad4ec0c2ea2       kube-proxy-qmxwr
	a3bd1ff5e5ff5       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       1                   fde6ace2974e9       storage-provisioner
	6df894180cc94       8e97cdb19e7cc                                                                                         About a minute ago   Exited              kube-controller-manager   1                   42b5c6585f2bd       kube-controller-manager-functional-753000
	48f8cd99718bf       014faa467e297                                                                                         About a minute ago   Exited              etcd                      1                   79776bc5de1dc       etcd-functional-753000
	5ce2150b8c84e       d48f992a22722                                                                                         About a minute ago   Exited              kube-scheduler            1                   317e5b72f38c4       kube-scheduler-functional-753000
	
	
	==> coredns [44dad924a032] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60402 - 37294 "HINFO IN 1773491451253199821.5283104859372594300. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004666837s
	[INFO] 10.244.0.1:61964 - 30026 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000178759s
	[INFO] 10.244.0.1:1083 - 34909 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000058184s
	[INFO] 10.244.0.1:18664 - 40996 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000057851s
	[INFO] 10.244.0.1:51447 - 49530 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001944404s
	[INFO] 10.244.0.1:5664 - 14500 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000067181s
	[INFO] 10.244.0.1:29773 - 45619 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000021741s
	
	
	==> coredns [e6877e10af10] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48855 - 60228 "HINFO IN 8066170242919402207.5496413154315865555. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010771777s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-753000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-753000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b13baeaf4895dcc6a8c5d0ab64a27ff86dff4ae3
	                    minikube.k8s.io/name=functional-753000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T16_11_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 23:11:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-753000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 23:14:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 23:14:17 +0000   Mon, 29 Jul 2024 23:11:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 23:14:17 +0000   Mon, 29 Jul 2024 23:11:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 23:14:17 +0000   Mon, 29 Jul 2024 23:11:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 23:14:17 +0000   Mon, 29 Jul 2024 23:12:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-753000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 42d44260bc44488cab5b0195c320c6d0
	  System UUID:                42d44260bc44488cab5b0195c320c6d0
	  Boot ID:                    df51780e-1221-4907-b8c7-8e82c13b2574
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.0
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-65f5d5cc78-qwt2w                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  default                     hello-node-connect-6f49f58cd5-x4s9d          0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 coredns-7db6d8ff4d-hvthm                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m13s
	  kube-system                 etcd-functional-753000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m27s
	  kube-system                 kube-apiserver-functional-753000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-controller-manager-functional-753000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-qmxwr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-scheduler-functional-753000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m12s                kube-proxy       
	  Normal  Starting                 67s                  kube-proxy       
	  Normal  Starting                 110s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m27s                kubelet          Node functional-753000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m27s                kubelet          Node functional-753000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m27s                kubelet          Node functional-753000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m27s                kubelet          Starting kubelet.
	  Normal  NodeReady                2m24s                kubelet          Node functional-753000 status is now: NodeReady
	  Normal  RegisteredNode           2m14s                node-controller  Node functional-753000 event: Registered Node functional-753000 in Controller
	  Normal  NodeHasNoDiskPressure    115s (x8 over 115s)  kubelet          Node functional-753000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  115s (x8 over 115s)  kubelet          Node functional-753000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     115s (x7 over 115s)  kubelet          Node functional-753000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  115s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           100s                 node-controller  Node functional-753000 event: Registered Node functional-753000 in Controller
	  Normal  Starting                 72s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  72s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  71s (x8 over 72s)    kubelet          Node functional-753000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    71s (x8 over 72s)    kubelet          Node functional-753000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     71s (x7 over 72s)    kubelet          Node functional-753000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s                  node-controller  Node functional-753000 event: Registered Node functional-753000 in Controller
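
The Allocated resources figures above follow directly from the node capacity: 750m of requested CPU against 2 cores is 750/2000 = 37.5%, truncated to 37%, and 170Mi of 3904740Ki memory is roughly 4%. The three clusters of "Starting kubelet." events (2m27s, 115s and 72s ago) mark the kubelet restarts triggered during the functional tests. The same view can be regenerated against this profile (illustrative command, using the context name from this report):

  kubectl --context functional-753000 describe node functional-753000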
	
	
	==> dmesg <==
	[ +12.039381] kauditd_printk_skb: 31 callbacks suppressed
	[  +4.486108] systemd-fstab-generator[5172]: Ignoring "noauto" option for root device
	[  +9.293541] systemd-fstab-generator[5599]: Ignoring "noauto" option for root device
	[  +0.055272] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.099761] systemd-fstab-generator[5633]: Ignoring "noauto" option for root device
	[  +0.092662] systemd-fstab-generator[5645]: Ignoring "noauto" option for root device
	[  +0.096397] systemd-fstab-generator[5659]: Ignoring "noauto" option for root device
	[Jul29 23:13] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.320489] systemd-fstab-generator[6299]: Ignoring "noauto" option for root device
	[  +0.085770] systemd-fstab-generator[6311]: Ignoring "noauto" option for root device
	[  +0.064889] systemd-fstab-generator[6323]: Ignoring "noauto" option for root device
	[  +0.084425] systemd-fstab-generator[6338]: Ignoring "noauto" option for root device
	[  +0.221723] systemd-fstab-generator[6515]: Ignoring "noauto" option for root device
	[  +0.934296] systemd-fstab-generator[6638]: Ignoring "noauto" option for root device
	[  +3.403520] kauditd_printk_skb: 199 callbacks suppressed
	[ +12.104082] kauditd_printk_skb: 31 callbacks suppressed
	[  +2.937410] systemd-fstab-generator[7670]: Ignoring "noauto" option for root device
	[  +5.093078] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.715790] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.014425] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.397918] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.664033] kauditd_printk_skb: 11 callbacks suppressed
	[Jul29 23:14] kauditd_printk_skb: 4 callbacks suppressed
	[  +8.662391] kauditd_printk_skb: 15 callbacks suppressed
	[  +8.265407] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [48f8cd99718b] <==
	{"level":"info","ts":"2024-07-29T23:12:30.314742Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T23:12:31.862123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T23:12:31.862305Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T23:12:31.862356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-07-29T23:12:31.862392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T23:12:31.862414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-29T23:12:31.862443Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T23:12:31.862462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-29T23:12:31.865524Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-753000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T23:12:31.865713Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T23:12:31.866245Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T23:12:31.866452Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T23:12:31.866312Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T23:12:31.870212Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T23:12:31.870277Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-29T23:12:59.078056Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T23:12:59.078086Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-753000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-07-29T23:12:59.078142Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T23:12:59.078189Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T23:12:59.085017Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T23:12:59.085046Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T23:12:59.085068Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-07-29T23:12:59.090432Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-29T23:12:59.090495Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-29T23:12:59.090503Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-753000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [ed84fe0b3c80] <==
	{"level":"info","ts":"2024-07-29T23:13:13.616702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-07-29T23:13:13.624269Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-07-29T23:13:13.624373Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T23:13:13.62443Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T23:13:13.625036Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T23:13:13.625892Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T23:13:13.613955Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T23:13:13.626029Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T23:13:13.626042Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T23:13:14.995134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-29T23:13:14.995273Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-29T23:13:14.995335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-29T23:13:14.995763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-07-29T23:13:14.995813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-29T23:13:14.995843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-07-29T23:13:14.995866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-29T23:13:14.99819Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-753000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T23:13:14.998266Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T23:13:14.99894Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T23:13:14.999156Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T23:13:14.999352Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T23:13:15.003291Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-29T23:13:15.003548Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-07-29T23:13:55.129918Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.053174ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8736"}
	{"level":"info","ts":"2024-07-29T23:13:55.129958Z","caller":"traceutil/trace.go:171","msg":"trace[1858443988] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:683; }","duration":"141.104362ms","start":"2024-07-29T23:13:54.988847Z","end":"2024-07-29T23:13:55.129951Z","steps":["trace[1858443988] 'range keys from in-memory index tree'  (duration: 140.690865ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:14:24 up 2 min,  0 users,  load average: 0.56, 0.28, 0.11
	Linux functional-753000 5.10.207 #1 SMP PREEMPT Tue Jul 23 01:19:38 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ea653298043d] <==
	I0729 23:13:15.633267       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 23:13:15.633340       1 aggregator.go:165] initial CRD sync complete...
	I0729 23:13:15.633372       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 23:13:15.633389       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 23:13:15.633405       1 cache.go:39] Caches are synced for autoregister controller
	I0729 23:13:15.633671       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 23:13:15.636028       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0729 23:13:15.636261       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 23:13:15.642015       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 23:13:15.655818       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 23:13:15.655826       1 policy_source.go:224] refreshing policies
	I0729 23:13:15.670846       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 23:13:16.533170       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 23:13:17.014421       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 23:13:17.018279       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 23:13:17.029298       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 23:13:17.037028       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 23:13:17.039849       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 23:13:28.122142       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 23:13:28.322279       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 23:13:36.351803       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.35.225"}
	I0729 23:13:42.063194       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.110.71.104"}
	I0729 23:13:52.424757       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0729 23:13:52.472129       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.101.169.114"}
	I0729 23:14:11.813110       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.99.225.100"}
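
Each "allocated clusterIPs" line corresponds to a Service created by the functional tests (invalid-svc, nginx-svc, hello-node-connect, hello-node). The resulting service table can be listed directly (illustrative command):

  kubectl --context functional-753000 get svc -A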
	
	
	==> kube-controller-manager [6df894180cc9] <==
	I0729 23:12:44.648040       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0729 23:12:44.650260       1 shared_informer.go:320] Caches are synced for deployment
	I0729 23:12:44.650296       1 shared_informer.go:320] Caches are synced for HPA
	I0729 23:12:44.651394       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0729 23:12:44.652465       1 shared_informer.go:320] Caches are synced for expand
	I0729 23:12:44.660672       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0729 23:12:44.661739       1 shared_informer.go:320] Caches are synced for GC
	I0729 23:12:44.661761       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0729 23:12:44.662922       1 shared_informer.go:320] Caches are synced for TTL
	I0729 23:12:44.662957       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0729 23:12:44.663018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.574µs"
	I0729 23:12:44.664037       1 shared_informer.go:320] Caches are synced for taint
	I0729 23:12:44.664091       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 23:12:44.664123       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-753000"
	I0729 23:12:44.664275       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 23:12:44.725657       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0729 23:12:44.732512       1 shared_informer.go:320] Caches are synced for disruption
	I0729 23:12:44.844446       1 shared_informer.go:320] Caches are synced for daemon sets
	I0729 23:12:44.845596       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 23:12:44.884168       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 23:12:44.914939       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 23:12:44.921360       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 23:12:45.294982       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 23:12:45.319089       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 23:12:45.319103       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [bf48719bc85b] <==
	I0729 23:13:28.196994       1 shared_informer.go:320] Caches are synced for daemon sets
	I0729 23:13:28.220348       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0729 23:13:28.267744       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 23:13:28.270546       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0729 23:13:28.273403       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 23:13:28.300320       1 shared_informer.go:320] Caches are synced for disruption
	I0729 23:13:28.323650       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 23:13:28.728945       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 23:13:28.785582       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 23:13:28.785591       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 23:13:52.448795       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="21.458537ms"
	I0729 23:13:52.455600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="6.305432ms"
	I0729 23:13:52.455712       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="52.353µs"
	I0729 23:13:52.463814       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="17.201µs"
	I0729 23:13:58.245196       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="20.575µs"
	I0729 23:13:59.262375       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="22.74µs"
	I0729 23:14:00.270686       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="24.157µs"
	I0729 23:14:11.783049       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="10.774553ms"
	I0729 23:14:11.789837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="6.732357ms"
	I0729 23:14:11.790106       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="37.943µs"
	I0729 23:14:11.790270       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="28.905µs"
	I0729 23:14:12.333004       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="24.449µs"
	I0729 23:14:12.340891       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="17.826µs"
	I0729 23:14:13.351279       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="23.699µs"
	I0729 23:14:23.954851       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="26.573µs"
	
	
	==> kube-proxy [86c2c179b2d0] <==
	I0729 23:13:16.424014       1 server_linux.go:69] "Using iptables proxy"
	I0729 23:13:16.432011       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0729 23:13:16.445790       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 23:13:16.445812       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 23:13:16.445822       1 server_linux.go:165] "Using iptables Proxier"
	I0729 23:13:16.446447       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 23:13:16.446523       1 server.go:872] "Version info" version="v1.30.3"
	I0729 23:13:16.446529       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 23:13:16.447029       1 config.go:192] "Starting service config controller"
	I0729 23:13:16.447032       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 23:13:16.447041       1 config.go:101] "Starting endpoint slice config controller"
	I0729 23:13:16.447043       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 23:13:16.447189       1 config.go:319] "Starting node config controller"
	I0729 23:13:16.447191       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 23:13:16.547599       1 shared_informer.go:320] Caches are synced for node config
	I0729 23:13:16.547669       1 shared_informer.go:320] Caches are synced for service config
	I0729 23:13:16.547675       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d7d6a60fef80] <==
	I0729 23:12:33.137412       1 server_linux.go:69] "Using iptables proxy"
	I0729 23:12:33.143279       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0729 23:12:33.154658       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 23:12:33.154674       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 23:12:33.154682       1 server_linux.go:165] "Using iptables Proxier"
	I0729 23:12:33.155263       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 23:12:33.155411       1 server.go:872] "Version info" version="v1.30.3"
	I0729 23:12:33.155422       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 23:12:33.155977       1 config.go:192] "Starting service config controller"
	I0729 23:12:33.156004       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 23:12:33.156015       1 config.go:101] "Starting endpoint slice config controller"
	I0729 23:12:33.156016       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 23:12:33.156934       1 config.go:319] "Starting node config controller"
	I0729 23:12:33.156938       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 23:12:33.256117       1 shared_informer.go:320] Caches are synced for service config
	I0729 23:12:33.256116       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 23:12:33.257013       1 shared_informer.go:320] Caches are synced for node config
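
Both kube-proxy instances log "No iptables support for family" for IPv6 and fall back to single-stack IPv4; this is consistent with the ip6tables failure in the kubelet log below (the guest kernel exposes no ip6tables nat table). It can be confirmed from inside the guest (illustrative, using the same binary the harness invokes):

  out/minikube-darwin-arm64 -p functional-753000 ssh "sudo ip6tables -t nat -L"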
	
	
	==> kube-scheduler [5ce2150b8c84] <==
	E0729 23:12:32.433989       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 23:12:32.436757       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 23:12:32.436824       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 23:12:32.436875       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 23:12:32.436896       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 23:12:32.436939       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 23:12:32.436960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 23:12:32.436989       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 23:12:32.437019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 23:12:32.437052       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 23:12:32.437075       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 23:12:32.437103       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 23:12:32.437116       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 23:12:32.437132       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 23:12:32.437086       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 23:12:32.437189       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 23:12:32.437198       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 23:12:32.437191       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 23:12:32.437177       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 23:12:32.437214       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 23:12:32.437161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 23:12:32.437278       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 23:12:32.437298       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0729 23:12:32.533280       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 23:12:59.084323       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f4a9c4ababd8] <==
	I0729 23:13:14.251822       1 serving.go:380] Generated self-signed cert in-memory
	W0729 23:13:15.556487       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 23:13:15.556527       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 23:13:15.556548       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 23:13:15.556556       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 23:13:15.570420       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 23:13:15.570510       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 23:13:15.571263       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 23:13:15.571319       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 23:13:15.571359       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 23:13:15.571381       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 23:13:15.672105       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 23:14:12 functional-753000 kubelet[6645]: I0729 23:14:12.328493    6645 scope.go:117] "RemoveContainer" containerID="a3b7a4cbeb5c38d7f47f2fc4dc7a6a5afe7a391d2937880820e957b4dec962e8"
	Jul 29 23:14:12 functional-753000 kubelet[6645]: I0729 23:14:12.328648    6645 scope.go:117] "RemoveContainer" containerID="fe2a64ec60da8c909ce4632f2a6ac07fa639348515a5accd8ef78cc103f4b89f"
	Jul 29 23:14:12 functional-753000 kubelet[6645]: E0729 23:14:12.328726    6645 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-x4s9d_default(38bee33c-b50e-47c8-80e1-f22b5a0ff484)\"" pod="default/hello-node-connect-6f49f58cd5-x4s9d" podUID="38bee33c-b50e-47c8-80e1-f22b5a0ff484"
	Jul 29 23:14:12 functional-753000 kubelet[6645]: I0729 23:14:12.336295    6645 scope.go:117] "RemoveContainer" containerID="1a84194aa08ea5253723e240be5c722543263afe5fefffb7c44a3bffd5599ce7"
	Jul 29 23:14:12 functional-753000 kubelet[6645]: E0729 23:14:12.948458    6645 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 23:14:12 functional-753000 kubelet[6645]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 23:14:12 functional-753000 kubelet[6645]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 23:14:12 functional-753000 kubelet[6645]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 23:14:12 functional-753000 kubelet[6645]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 23:14:12 functional-753000 kubelet[6645]: I0729 23:14:12.996556    6645 scope.go:117] "RemoveContainer" containerID="54add683f3b497a9ae0b12927c17dc03e924bbbd561b2b156aa4c41ff6928a60"
	Jul 29 23:14:13 functional-753000 kubelet[6645]: I0729 23:14:13.002433    6645 scope.go:117] "RemoveContainer" containerID="56da5c9c14690ce7bcb65fa29e07e83d7f7c77284b988985c9e71a5392d5c008"
	Jul 29 23:14:13 functional-753000 kubelet[6645]: I0729 23:14:13.008522    6645 scope.go:117] "RemoveContainer" containerID="1a84194aa08ea5253723e240be5c722543263afe5fefffb7c44a3bffd5599ce7"
	Jul 29 23:14:13 functional-753000 kubelet[6645]: I0729 23:14:13.346422    6645 scope.go:117] "RemoveContainer" containerID="a04666938a406199a6a75f3f4c9d93839b97d775d5d2b8d0b4e745ff31e04020"
	Jul 29 23:14:13 functional-753000 kubelet[6645]: E0729 23:14:13.346510    6645 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=echoserver-arm pod=hello-node-65f5d5cc78-qwt2w_default(24923419-353a-4823-8e45-8fc069d43997)\"" pod="default/hello-node-65f5d5cc78-qwt2w" podUID="24923419-353a-4823-8e45-8fc069d43997"
	Jul 29 23:14:19 functional-753000 kubelet[6645]: I0729 23:14:19.720930    6645 topology_manager.go:215] "Topology Admit Handler" podUID="e957466e-41c7-4f7d-82c2-451e8366802e" podNamespace="default" podName="busybox-mount"
	Jul 29 23:14:19 functional-753000 kubelet[6645]: I0729 23:14:19.743252    6645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/e957466e-41c7-4f7d-82c2-451e8366802e-test-volume\") pod \"busybox-mount\" (UID: \"e957466e-41c7-4f7d-82c2-451e8366802e\") " pod="default/busybox-mount"
	Jul 29 23:14:19 functional-753000 kubelet[6645]: I0729 23:14:19.743277    6645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4wz8\" (UniqueName: \"kubernetes.io/projected/e957466e-41c7-4f7d-82c2-451e8366802e-kube-api-access-b4wz8\") pod \"busybox-mount\" (UID: \"e957466e-41c7-4f7d-82c2-451e8366802e\") " pod="default/busybox-mount"
	Jul 29 23:14:23 functional-753000 kubelet[6645]: I0729 23:14:23.565689    6645 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4wz8\" (UniqueName: \"kubernetes.io/projected/e957466e-41c7-4f7d-82c2-451e8366802e-kube-api-access-b4wz8\") pod \"e957466e-41c7-4f7d-82c2-451e8366802e\" (UID: \"e957466e-41c7-4f7d-82c2-451e8366802e\") "
	Jul 29 23:14:23 functional-753000 kubelet[6645]: I0729 23:14:23.565717    6645 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/e957466e-41c7-4f7d-82c2-451e8366802e-test-volume\") pod \"e957466e-41c7-4f7d-82c2-451e8366802e\" (UID: \"e957466e-41c7-4f7d-82c2-451e8366802e\") "
	Jul 29 23:14:23 functional-753000 kubelet[6645]: I0729 23:14:23.565762    6645 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e957466e-41c7-4f7d-82c2-451e8366802e-test-volume" (OuterVolumeSpecName: "test-volume") pod "e957466e-41c7-4f7d-82c2-451e8366802e" (UID: "e957466e-41c7-4f7d-82c2-451e8366802e"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 29 23:14:23 functional-753000 kubelet[6645]: I0729 23:14:23.568513    6645 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e957466e-41c7-4f7d-82c2-451e8366802e-kube-api-access-b4wz8" (OuterVolumeSpecName: "kube-api-access-b4wz8") pod "e957466e-41c7-4f7d-82c2-451e8366802e" (UID: "e957466e-41c7-4f7d-82c2-451e8366802e"). InnerVolumeSpecName "kube-api-access-b4wz8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 23:14:23 functional-753000 kubelet[6645]: I0729 23:14:23.665825    6645 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-b4wz8\" (UniqueName: \"kubernetes.io/projected/e957466e-41c7-4f7d-82c2-451e8366802e-kube-api-access-b4wz8\") on node \"functional-753000\" DevicePath \"\""
	Jul 29 23:14:23 functional-753000 kubelet[6645]: I0729 23:14:23.665846    6645 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/e957466e-41c7-4f7d-82c2-451e8366802e-test-volume\") on node \"functional-753000\" DevicePath \"\""
	Jul 29 23:14:23 functional-753000 kubelet[6645]: I0729 23:14:23.944000    6645 scope.go:117] "RemoveContainer" containerID="fe2a64ec60da8c909ce4632f2a6ac07fa639348515a5accd8ef78cc103f4b89f"
	Jul 29 23:14:23 functional-753000 kubelet[6645]: E0729 23:14:23.944156    6645 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-x4s9d_default(38bee33c-b50e-47c8-80e1-f22b5a0ff484)\"" pod="default/hello-node-connect-6f49f58cd5-x4s9d" podUID="38bee33c-b50e-47c8-80e1-f22b5a0ff484"
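
The recurring pod_workers errors are the kubelet's restart backoff, not new failures: the echoserver-arm containers in hello-node-65f5d5cc78-qwt2w and hello-node-connect-6f49f58cd5-x4s9d keep exiting, so the CrashLoopBackOff delay doubles on each retry (10s, 20s, and so on, capped at 5m). The crash itself can be inspected from the pod (illustrative commands, pod name taken from this log):

  kubectl --context functional-753000 describe pod hello-node-connect-6f49f58cd5-x4s9d
  kubectl --context functional-753000 logs hello-node-connect-6f49f58cd5-x4s9d --previous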
	
	
	==> storage-provisioner [26aace26f0e4] <==
	I0729 23:13:16.415995       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 23:13:16.420346       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 23:13:16.420428       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 23:13:33.806205       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 23:13:33.806291       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2e1f84ab-a535-484e-a4ac-7fee8f55ae04", APIVersion:"v1", ResourceVersion:"583", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-753000_ac01cb34-004c-438a-8be2-9cb5478a5c0f became leader
	I0729 23:13:33.806371       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-753000_ac01cb34-004c-438a-8be2-9cb5478a5c0f!
	I0729 23:13:33.907035       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-753000_ac01cb34-004c-438a-8be2-9cb5478a5c0f!
	I0729 23:13:49.894289       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0729 23:13:49.894322       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    d1dd19e8-37ee-4df9-bd18-a4a909db6009 349 0 2024-07-29 23:12:11 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-07-29 23:12:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-8b73cb21-01c2-447f-8715-31df369994e5 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  8b73cb21-01c2-447f-8715-31df369994e5 649 0 2024-07-29 23:13:49 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-07-29 23:13:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-07-29 23:13:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0729 23:13:49.894826       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-8b73cb21-01c2-447f-8715-31df369994e5" provisioned
	I0729 23:13:49.894846       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0729 23:13:49.895456       1 volume_store.go:212] Trying to save persistentvolume "pvc-8b73cb21-01c2-447f-8715-31df369994e5"
	I0729 23:13:49.894901       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"8b73cb21-01c2-447f-8715-31df369994e5", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0729 23:13:49.899480       1 volume_store.go:219] persistentvolume "pvc-8b73cb21-01c2-447f-8715-31df369994e5" saved
	I0729 23:13:49.899639       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"8b73cb21-01c2-447f-8715-31df369994e5", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-8b73cb21-01c2-447f-8715-31df369994e5
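
This block shows a full hostpath provisioning round trip: the claim default/myclaim (500Mi, ReadWriteOnce, class "standard") is bound to volume pvc-8b73cb21-01c2-447f-8715-31df369994e5, backed by /tmp/hostpath-provisioner/default/myclaim inside the VM. The backing path can be read off the PersistentVolume (illustrative jsonpath query, names taken from the log above):

  kubectl --context functional-753000 get pv pvc-8b73cb21-01c2-447f-8715-31df369994e5 -o jsonpath='{.spec.hostPath.path}'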
	
	
	==> storage-provisioner [a3bd1ff5e5ff] <==
	I0729 23:12:33.102167       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 23:12:33.124606       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 23:12:33.124659       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 23:12:50.510711       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 23:12:50.511898       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2e1f84ab-a535-484e-a4ac-7fee8f55ae04", APIVersion:"v1", ResourceVersion:"485", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-753000_c4240419-1d4d-44ba-916a-d2328db16bf6 became leader
	I0729 23:12:50.511913       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-753000_c4240419-1d4d-44ba-916a-d2328db16bf6!
	I0729 23:12:50.612896       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-753000_c4240419-1d4d-44ba-916a-d2328db16bf6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-753000 -n functional-753000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-753000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-753000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-753000 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-753000/192.168.105.4
	Start Time:       Mon, 29 Jul 2024 16:14:19 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://5e41fd095e26b9e33634ab1689bce2a635f5f4877cee7da11da5926419a062f8
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Jul 2024 16:14:21 -0700
	      Finished:     Mon, 29 Jul 2024 16:14:21 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b4wz8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-b4wz8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  4s    default-scheduler  Successfully assigned default/busybox-mount to functional-753000
	  Normal  Pulling    4s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.274s (1.274s including waiting). Image size: 3547125 bytes.
	  Normal  Created    3s    kubelet            Created container mount-munger
	  Normal  Started    3s    kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (32.21s)
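
The root cause is the echoserver-arm CrashLoopBackOff recorded in the kubelet log above: hello-node-connect never produced a ready endpoint, so the service-connect check timed out. busybox-mount appears in the non-running list only because the post-mortem filter selects on phase rather than on success; the pod terminated with exit code 0 (phase Succeeded). The same filter can be run by hand (mirroring the harness invocation above):

  kubectl --context functional-753000 get po -A --field-selector=status.phase!=Running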

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (214.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 node stop m02 -v=7 --alsologtostderr
E0729 16:18:52.172042    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0729 16:19:02.414055    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-291000 node stop m02 -v=7 --alsologtostderr: (12.202251917s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 status -v=7 --alsologtostderr
E0729 16:19:22.895959    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0729 16:20:03.857452    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0729 16:21:25.778197    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-291000 status -v=7 --alsologtostderr: exit status 7 (2m55.963546958s)

-- stdout --
	ha-291000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-291000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-291000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-291000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0729 16:19:04.022533    2643 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:19:04.022704    2643 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:19:04.022707    2643 out.go:304] Setting ErrFile to fd 2...
	I0729 16:19:04.022709    2643 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:19:04.022853    2643 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:19:04.022977    2643 out.go:298] Setting JSON to false
	I0729 16:19:04.022994    2643 mustload.go:65] Loading cluster: ha-291000
	I0729 16:19:04.023034    2643 notify.go:220] Checking for updates...
	I0729 16:19:04.023232    2643 config.go:182] Loaded profile config "ha-291000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:19:04.023240    2643 status.go:255] checking status of ha-291000 ...
	I0729 16:19:04.023926    2643 status.go:330] ha-291000 host status = "Running" (err=<nil>)
	I0729 16:19:04.023935    2643 host.go:66] Checking if "ha-291000" exists ...
	I0729 16:19:04.024035    2643 host.go:66] Checking if "ha-291000" exists ...
	I0729 16:19:04.024151    2643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 16:19:04.024160    2643 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000/id_rsa Username:docker}
	W0729 16:19:29.944889    2643 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0729 16:19:29.945024    2643 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0729 16:19:29.945046    2643 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0729 16:19:29.945055    2643 status.go:257] ha-291000 status: &{Name:ha-291000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 16:19:29.945077    2643 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0729 16:19:29.945086    2643 status.go:255] checking status of ha-291000-m02 ...
	I0729 16:19:29.945522    2643 status.go:330] ha-291000-m02 host status = "Stopped" (err=<nil>)
	I0729 16:19:29.945548    2643 status.go:343] host is not running, skipping remaining checks
	I0729 16:19:29.945554    2643 status.go:257] ha-291000-m02 status: &{Name:ha-291000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 16:19:29.945567    2643 status.go:255] checking status of ha-291000-m03 ...
	I0729 16:19:29.946723    2643 status.go:330] ha-291000-m03 host status = "Running" (err=<nil>)
	I0729 16:19:29.946733    2643 host.go:66] Checking if "ha-291000-m03" exists ...
	I0729 16:19:29.946938    2643 host.go:66] Checking if "ha-291000-m03" exists ...
	I0729 16:19:29.947212    2643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 16:19:29.947226    2643 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000-m03/id_rsa Username:docker}
	W0729 16:20:44.947844    2643 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0729 16:20:44.947902    2643 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0729 16:20:44.947912    2643 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0729 16:20:44.947916    2643 status.go:257] ha-291000-m03 status: &{Name:ha-291000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 16:20:44.947929    2643 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0729 16:20:44.947933    2643 status.go:255] checking status of ha-291000-m04 ...
	I0729 16:20:44.948717    2643 status.go:330] ha-291000-m04 host status = "Running" (err=<nil>)
	I0729 16:20:44.948725    2643 host.go:66] Checking if "ha-291000-m04" exists ...
	I0729 16:20:44.948831    2643 host.go:66] Checking if "ha-291000-m04" exists ...
	I0729 16:20:44.948951    2643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 16:20:44.948958    2643 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000-m04/id_rsa Username:docker}
	W0729 16:21:59.949405    2643 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0729 16:21:59.949455    2643 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0729 16:21:59.949463    2643 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0729 16:21:59.949468    2643 status.go:257] ha-291000-m04 status: &{Name:ha-291000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0729 16:21:59.949479    2643 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-291000 status -v=7 --alsologtostderr": ha-291000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-291000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-291000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-291000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-291000 status -v=7 --alsologtostderr": ha-291000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-291000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-291000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-291000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-291000 status -v=7 --alsologtostderr": ha-291000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-291000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-291000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-291000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-291000 -n ha-291000
E0729 16:22:01.317232    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-291000 -n ha-291000: exit status 3 (25.958509666s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0729 16:22:25.907949    2691 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0729 16:22:25.907959    2691 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-291000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.12s)
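
Every per-node probe in the status check above is the same remote command run over SSH, and on each node still nominally Running it fails the same way: dial tcp <ip>:22: connect: operation timed out. The host: Error / kubelet: Nonexistent rows therefore mean the probe never ran, not that kubelet was inspected and found missing. Locally, the probe and a manual reachability check reduce to the lines below (the df pipeline is verbatim from the log; nc is a hypothetical manual check, not something the suite runs):

    df -h /var | awk 'NR==2{print $5}'   # field 5 of the data row, i.e. the Use% of /var
    nc -z -w 5 192.168.105.5 22          # does the node's SSH port answer within 5s?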

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (104s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0729 16:23:41.914596    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m18.039607833s)
ha_test.go:413: expected profile "ha-291000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-291000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-291000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-291000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\
":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docke
r\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-291000 -n ha-291000
E0729 16:24:09.618150    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-291000 -n ha-291000: exit status 3 (25.964125459s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0729 16:24:09.906842    2713 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0729 16:24:09.906889    2713 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-291000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (104.00s)
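
This assertion only inspects the Status field of the ha-291000 entry in "profile list --output json", expecting "Degraded" after one control-plane node is stopped, and instead finds "Stopped". The field can be pulled out of the JSON blob above with a one-liner such as the following (jq is an assumed triage tool here; the suite itself decodes the JSON in Go):

    out/minikube-darwin-arm64 profile list --output json \
      | jq -r '.valid[] | "\(.Name)\t\(.Status)"'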

TestMultiControlPlane/serial/RestartSecondaryNode (209.27s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-291000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.112104459s)

-- stdout --
	* Starting "ha-291000-m02" control-plane node in "ha-291000" cluster
	* Restarting existing qemu2 VM for "ha-291000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-291000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:24:09.971760    2717 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:24:09.972074    2717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:24:09.972079    2717 out.go:304] Setting ErrFile to fd 2...
	I0729 16:24:09.972082    2717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:24:09.972261    2717 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:24:09.972582    2717 mustload.go:65] Loading cluster: ha-291000
	I0729 16:24:09.972883    2717 config.go:182] Loaded profile config "ha-291000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0729 16:24:09.973184    2717 host.go:58] "ha-291000-m02" host status: Stopped
	I0729 16:24:09.977671    2717 out.go:177] * Starting "ha-291000-m02" control-plane node in "ha-291000" cluster
	I0729 16:24:09.981594    2717 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:24:09.981618    2717 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:24:09.981627    2717 cache.go:56] Caching tarball of preloaded images
	I0729 16:24:09.981715    2717 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:24:09.981723    2717 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:24:09.981796    2717 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/ha-291000/config.json ...
	I0729 16:24:09.982145    2717 start.go:360] acquireMachinesLock for ha-291000-m02: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:24:09.982199    2717 start.go:364] duration metric: took 38.375µs to acquireMachinesLock for "ha-291000-m02"
	I0729 16:24:09.982211    2717 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:24:09.982221    2717 fix.go:54] fixHost starting: m02
	I0729 16:24:09.982410    2717 fix.go:112] recreateIfNeeded on ha-291000-m02: state=Stopped err=<nil>
	W0729 16:24:09.982419    2717 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:24:09.986613    2717 out.go:177] * Restarting existing qemu2 VM for "ha-291000-m02" ...
	I0729 16:24:09.989551    2717 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:24:09.989600    2717 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:4e:b9:5c:64:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000-m02/disk.qcow2
	I0729 16:24:09.992800    2717 main.go:141] libmachine: STDOUT: 
	I0729 16:24:09.992826    2717 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:24:09.992860    2717 fix.go:56] duration metric: took 10.639125ms for fixHost
	I0729 16:24:09.992864    2717 start.go:83] releasing machines lock for "ha-291000-m02", held for 10.660042ms
	W0729 16:24:09.992871    2717 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:24:09.992907    2717 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:24:09.992912    2717 start.go:729] Will try again in 5 seconds ...
	I0729 16:24:14.994893    2717 start.go:360] acquireMachinesLock for ha-291000-m02: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:24:14.995017    2717 start.go:364] duration metric: took 103.75µs to acquireMachinesLock for "ha-291000-m02"
	I0729 16:24:14.995053    2717 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:24:14.995066    2717 fix.go:54] fixHost starting: m02
	I0729 16:24:14.995238    2717 fix.go:112] recreateIfNeeded on ha-291000-m02: state=Stopped err=<nil>
	W0729 16:24:14.995246    2717 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:24:14.999217    2717 out.go:177] * Restarting existing qemu2 VM for "ha-291000-m02" ...
	I0729 16:24:15.003169    2717 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:24:15.003218    2717 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:4e:b9:5c:64:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000-m02/disk.qcow2
	I0729 16:24:15.005471    2717 main.go:141] libmachine: STDOUT: 
	I0729 16:24:15.005490    2717 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:24:15.005511    2717 fix.go:56] duration metric: took 10.453166ms for fixHost
	I0729 16:24:15.005515    2717 start.go:83] releasing machines lock for "ha-291000-m02", held for 10.491833ms
	W0729 16:24:15.005584    2717 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-291000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-291000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:24:15.009119    2717 out.go:177] 
	W0729 16:24:15.013189    2717 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:24:15.013204    2717 out.go:239] * 
	* 
	W0729 16:24:15.014978    2717 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:24:15.019162    2717 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:422: I0729 16:24:09.971760    2717 out.go:291] Setting OutFile to fd 1 ...
I0729 16:24:09.972074    2717 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:24:09.972079    2717 out.go:304] Setting ErrFile to fd 2...
I0729 16:24:09.972082    2717 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:24:09.972261    2717 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
I0729 16:24:09.972582    2717 mustload.go:65] Loading cluster: ha-291000
I0729 16:24:09.972883    2717 config.go:182] Loaded profile config "ha-291000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
W0729 16:24:09.973184    2717 host.go:58] "ha-291000-m02" host status: Stopped
I0729 16:24:09.977671    2717 out.go:177] * Starting "ha-291000-m02" control-plane node in "ha-291000" cluster
I0729 16:24:09.981594    2717 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0729 16:24:09.981618    2717 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0729 16:24:09.981627    2717 cache.go:56] Caching tarball of preloaded images
I0729 16:24:09.981715    2717 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0729 16:24:09.981723    2717 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0729 16:24:09.981796    2717 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/ha-291000/config.json ...
I0729 16:24:09.982145    2717 start.go:360] acquireMachinesLock for ha-291000-m02: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 16:24:09.982199    2717 start.go:364] duration metric: took 38.375µs to acquireMachinesLock for "ha-291000-m02"
I0729 16:24:09.982211    2717 start.go:96] Skipping create...Using existing machine configuration
I0729 16:24:09.982221    2717 fix.go:54] fixHost starting: m02
I0729 16:24:09.982410    2717 fix.go:112] recreateIfNeeded on ha-291000-m02: state=Stopped err=<nil>
W0729 16:24:09.982419    2717 fix.go:138] unexpected machine state, will restart: <nil>
I0729 16:24:09.986613    2717 out.go:177] * Restarting existing qemu2 VM for "ha-291000-m02" ...
I0729 16:24:09.989551    2717 qemu.go:418] Using hvf for hardware acceleration
I0729 16:24:09.989600    2717 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:4e:b9:5c:64:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000-m02/disk.qcow2
I0729 16:24:09.992800    2717 main.go:141] libmachine: STDOUT: 
I0729 16:24:09.992826    2717 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0729 16:24:09.992860    2717 fix.go:56] duration metric: took 10.639125ms for fixHost
I0729 16:24:09.992864    2717 start.go:83] releasing machines lock for "ha-291000-m02", held for 10.660042ms
W0729 16:24:09.992871    2717 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 16:24:09.992907    2717 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 16:24:09.992912    2717 start.go:729] Will try again in 5 seconds ...
I0729 16:24:14.994893    2717 start.go:360] acquireMachinesLock for ha-291000-m02: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 16:24:14.995017    2717 start.go:364] duration metric: took 103.75µs to acquireMachinesLock for "ha-291000-m02"
I0729 16:24:14.995053    2717 start.go:96] Skipping create...Using existing machine configuration
I0729 16:24:14.995066    2717 fix.go:54] fixHost starting: m02
I0729 16:24:14.995238    2717 fix.go:112] recreateIfNeeded on ha-291000-m02: state=Stopped err=<nil>
W0729 16:24:14.995246    2717 fix.go:138] unexpected machine state, will restart: <nil>
I0729 16:24:14.999217    2717 out.go:177] * Restarting existing qemu2 VM for "ha-291000-m02" ...
I0729 16:24:15.003169    2717 qemu.go:418] Using hvf for hardware acceleration
I0729 16:24:15.003218    2717 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:4e:b9:5c:64:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000-m02/disk.qcow2
I0729 16:24:15.005471    2717 main.go:141] libmachine: STDOUT: 
I0729 16:24:15.005490    2717 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0729 16:24:15.005511    2717 fix.go:56] duration metric: took 10.453166ms for fixHost
I0729 16:24:15.005515    2717 start.go:83] releasing machines lock for "ha-291000-m02", held for 10.491833ms
W0729 16:24:15.005584    2717 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-291000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-291000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 16:24:15.009119    2717 out.go:177] 
W0729 16:24:15.013189    2717 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 16:24:15.013204    2717 out.go:239] * 
* 
W0729 16:24:15.014978    2717 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 16:24:15.019162    2717 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-291000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 status -v=7 --alsologtostderr
E0729 16:27:01.311984    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-291000 status -v=7 --alsologtostderr: exit status 7 (2m58.162284375s)

-- stdout --
	ha-291000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-291000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-291000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-291000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0729 16:24:15.054495    2721 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:24:15.054684    2721 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:24:15.054688    2721 out.go:304] Setting ErrFile to fd 2...
	I0729 16:24:15.054690    2721 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:24:15.054839    2721 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:24:15.054976    2721 out.go:298] Setting JSON to false
	I0729 16:24:15.054988    2721 mustload.go:65] Loading cluster: ha-291000
	I0729 16:24:15.055026    2721 notify.go:220] Checking for updates...
	I0729 16:24:15.055197    2721 config.go:182] Loaded profile config "ha-291000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:24:15.055205    2721 status.go:255] checking status of ha-291000 ...
	I0729 16:24:15.055895    2721 status.go:330] ha-291000 host status = "Running" (err=<nil>)
	I0729 16:24:15.055904    2721 host.go:66] Checking if "ha-291000" exists ...
	I0729 16:24:15.055997    2721 host.go:66] Checking if "ha-291000" exists ...
	I0729 16:24:15.056121    2721 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 16:24:15.056129    2721 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000/id_rsa Username:docker}
	W0729 16:24:15.056304    2721 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0729 16:24:15.056323    2721 retry.go:31] will retry after 150.309001ms: dial tcp 192.168.105.5:22: connect: host is down
	W0729 16:24:15.208832    2721 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0729 16:24:15.208854    2721 retry.go:31] will retry after 274.032708ms: dial tcp 192.168.105.5:22: connect: host is down
	W0729 16:24:15.485058    2721 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0729 16:24:15.485080    2721 retry.go:31] will retry after 631.902515ms: dial tcp 192.168.105.5:22: connect: host is down
	W0729 16:24:16.119155    2721 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0729 16:24:16.119176    2721 retry.go:31] will retry after 515.652173ms: dial tcp 192.168.105.5:22: connect: host is down
	W0729 16:24:16.637043    2721 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0729 16:24:16.637115    2721 retry.go:31] will retry after 316.055383ms: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0729 16:24:16.954905    2721 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000/id_rsa Username:docker}
	W0729 16:24:16.955202    2721 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0729 16:24:16.955214    2721 retry.go:31] will retry after 285.405314ms: dial tcp 192.168.105.5:22: connect: host is down
	W0729 16:24:43.168653    2721 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0729 16:24:43.168708    2721 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0729 16:24:43.168738    2721 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0729 16:24:43.168752    2721 status.go:257] ha-291000 status: &{Name:ha-291000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 16:24:43.168764    2721 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0729 16:24:43.168768    2721 status.go:255] checking status of ha-291000-m02 ...
	I0729 16:24:43.169013    2721 status.go:330] ha-291000-m02 host status = "Stopped" (err=<nil>)
	I0729 16:24:43.169019    2721 status.go:343] host is not running, skipping remaining checks
	I0729 16:24:43.169021    2721 status.go:257] ha-291000-m02 status: &{Name:ha-291000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 16:24:43.169025    2721 status.go:255] checking status of ha-291000-m03 ...
	I0729 16:24:43.169669    2721 status.go:330] ha-291000-m03 host status = "Running" (err=<nil>)
	I0729 16:24:43.169678    2721 host.go:66] Checking if "ha-291000-m03" exists ...
	I0729 16:24:43.169799    2721 host.go:66] Checking if "ha-291000-m03" exists ...
	I0729 16:24:43.169922    2721 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 16:24:43.169928    2721 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000-m03/id_rsa Username:docker}
	W0729 16:25:58.170762    2721 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0729 16:25:58.171033    2721 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0729 16:25:58.171080    2721 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0729 16:25:58.171101    2721 status.go:257] ha-291000-m03 status: &{Name:ha-291000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 16:25:58.171148    2721 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0729 16:25:58.171167    2721 status.go:255] checking status of ha-291000-m04 ...
	I0729 16:25:58.174308    2721 status.go:330] ha-291000-m04 host status = "Running" (err=<nil>)
	I0729 16:25:58.174336    2721 host.go:66] Checking if "ha-291000-m04" exists ...
	I0729 16:25:58.174854    2721 host.go:66] Checking if "ha-291000-m04" exists ...
	I0729 16:25:58.175434    2721 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 16:25:58.175464    2721 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000-m04/id_rsa Username:docker}
	W0729 16:27:13.176987    2721 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0729 16:27:13.177179    2721 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0729 16:27:13.177216    2721 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0729 16:27:13.177235    2721 status.go:257] ha-291000-m04 status: &{Name:ha-291000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0729 16:27:13.177278    2721 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-291000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-291000 -n ha-291000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-291000 -n ha-291000: exit status 3 (25.997815834s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0729 16:27:39.176318    2871 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0729 16:27:39.176373    2871 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-291000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (209.27s)
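
The node start here dies before QEMU is launched: socket_vmnet_client cannot connect to /var/run/socket_vmnet (Connection refused), the driver retries once after 5s, and the command exits with GUEST_NODE_PROVISION. The same refusal recurs in the cluster restart below, which points at the socket_vmnet daemon on the CI host rather than at any one VM. Two standard host-side checks (the socket path is taken from the log; that the daemon would be visible to pgrep is an assumption about this setup):

    ls -l /var/run/socket_vmnet    # the UNIX socket the qemu2 driver dials
    pgrep -fl socket_vmnet         # is a socket_vmnet daemon process running at all?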

TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.42s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-291000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-291000 -v=7 --alsologtostderr
E0729 16:32:01.322846    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-291000 -v=7 --alsologtostderr: (3m49.021832625s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-291000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-291000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.230359958s)

-- stdout --
	* [ha-291000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-291000" primary control-plane node in "ha-291000" cluster
	* Restarting existing qemu2 VM for "ha-291000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-291000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:32:48.570252    3391 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:32:48.570448    3391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:32:48.570452    3391 out.go:304] Setting ErrFile to fd 2...
	I0729 16:32:48.570455    3391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:32:48.570616    3391 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:32:48.571945    3391 out.go:298] Setting JSON to false
	I0729 16:32:48.592382    3391 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1931,"bootTime":1722294037,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:32:48.592449    3391 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:32:48.597636    3391 out.go:177] * [ha-291000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:32:48.605506    3391 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:32:48.605567    3391 notify.go:220] Checking for updates...
	I0729 16:32:48.612460    3391 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:32:48.615463    3391 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:32:48.618414    3391 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:32:48.621478    3391 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:32:48.624513    3391 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:32:48.627783    3391 config.go:182] Loaded profile config "ha-291000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:32:48.627839    3391 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:32:48.632445    3391 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:32:48.638419    3391 start.go:297] selected driver: qemu2
	I0729 16:32:48.638429    3391 start.go:901] validating driver "qemu2" against &{Name:ha-291000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-291000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:
false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:32:48.638510    3391 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:32:48.641062    3391 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:32:48.641088    3391 cni.go:84] Creating CNI manager for ""
	I0729 16:32:48.641093    3391 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 16:32:48.641146    3391 start.go:340] cluster config:
	{Name:ha-291000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-291000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:32:48.645043    3391 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:32:48.653395    3391 out.go:177] * Starting "ha-291000" primary control-plane node in "ha-291000" cluster
	I0729 16:32:48.657443    3391 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:32:48.657462    3391 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:32:48.657472    3391 cache.go:56] Caching tarball of preloaded images
	I0729 16:32:48.657533    3391 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:32:48.657540    3391 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:32:48.657617    3391 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/ha-291000/config.json ...
	I0729 16:32:48.658049    3391 start.go:360] acquireMachinesLock for ha-291000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:32:48.658084    3391 start.go:364] duration metric: took 28.667µs to acquireMachinesLock for "ha-291000"
	I0729 16:32:48.658094    3391 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:32:48.658100    3391 fix.go:54] fixHost starting: 
	I0729 16:32:48.658215    3391 fix.go:112] recreateIfNeeded on ha-291000: state=Stopped err=<nil>
	W0729 16:32:48.658224    3391 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:32:48.662509    3391 out.go:177] * Restarting existing qemu2 VM for "ha-291000" ...
	I0729 16:32:48.670422    3391 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:32:48.670464    3391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:4e:f8:27:01:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000/disk.qcow2
	I0729 16:32:48.672588    3391 main.go:141] libmachine: STDOUT: 
	I0729 16:32:48.672606    3391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:32:48.672632    3391 fix.go:56] duration metric: took 14.532708ms for fixHost
	I0729 16:32:48.672636    3391 start.go:83] releasing machines lock for "ha-291000", held for 14.54775ms
	W0729 16:32:48.672643    3391 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:32:48.672679    3391 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:32:48.672684    3391 start.go:729] Will try again in 5 seconds ...
	I0729 16:32:53.674761    3391 start.go:360] acquireMachinesLock for ha-291000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:32:53.675126    3391 start.go:364] duration metric: took 275.875µs to acquireMachinesLock for "ha-291000"
	I0729 16:32:53.675623    3391 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:32:53.675646    3391 fix.go:54] fixHost starting: 
	I0729 16:32:53.676312    3391 fix.go:112] recreateIfNeeded on ha-291000: state=Stopped err=<nil>
	W0729 16:32:53.676341    3391 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:32:53.684690    3391 out.go:177] * Restarting existing qemu2 VM for "ha-291000" ...
	I0729 16:32:53.688736    3391 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:32:53.688981    3391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:4e:f8:27:01:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000/disk.qcow2
	I0729 16:32:53.697793    3391 main.go:141] libmachine: STDOUT: 
	I0729 16:32:53.697848    3391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:32:53.697908    3391 fix.go:56] duration metric: took 22.26725ms for fixHost
	I0729 16:32:53.697924    3391 start.go:83] releasing machines lock for "ha-291000", held for 22.778584ms
	W0729 16:32:53.698080    3391 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-291000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-291000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:32:53.706648    3391 out.go:177] 
	W0729 16:32:53.710783    3391 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:32:53.710827    3391 out.go:239] * 
	* 
	W0729 16:32:53.713368    3391 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:32:53.724661    3391 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-291000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-291000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-291000 -n ha-291000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-291000 -n ha-291000: exit status 7 (32.428917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-291000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.42s)
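
Every failed restart in this log dies at the same point: minikube launches qemu-system-aarch64 through socket_vmnet_client, and the client cannot reach the /var/run/socket_vmnet socket ("Connection refused"), so the VM never boots and the later subtests in this serial chain fail from the same root cause. A minimal triage sketch for the test host, assuming socket_vmnet was installed through Homebrew as in minikube's qemu2 driver setup (the launchd service label and the brew service name are assumptions, not taken from this log):

    # Does the socket exist, and is the daemon registered? (socket path copied from the log above)
    ls -l /var/run/socket_vmnet
    sudo launchctl list | grep -i socket_vmnet    # assumes the Homebrew launchd service label
    # Restart the daemon with root privileges, matching the Homebrew-based setup:
    HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet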

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-291000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.144417ms)

-- stdout --
	* The control-plane node ha-291000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-291000"

-- /stdout --
** stderr ** 
	I0729 16:32:53.862214    3404 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:32:53.862446    3404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:32:53.862449    3404 out.go:304] Setting ErrFile to fd 2...
	I0729 16:32:53.862452    3404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:32:53.862596    3404 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:32:53.862818    3404 mustload.go:65] Loading cluster: ha-291000
	I0729 16:32:53.863023    3404 config.go:182] Loaded profile config "ha-291000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0729 16:32:53.863363    3404 out.go:239] ! The control-plane node ha-291000 host is not running (will try others): state=Stopped
	! The control-plane node ha-291000 host is not running (will try others): state=Stopped
	W0729 16:32:53.863473    3404 out.go:239] ! The control-plane node ha-291000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-291000-m02 host is not running (will try others): state=Stopped
	I0729 16:32:53.867738    3404 out.go:177] * The control-plane node ha-291000-m03 host is not running: state=Stopped
	I0729 16:32:53.870756    3404 out.go:177]   To start a cluster, run: "minikube start -p ha-291000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-291000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-291000 status -v=7 --alsologtostderr: exit status 7 (29.523459ms)

-- stdout --
	ha-291000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-291000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-291000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-291000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:32:53.902200    3406 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:32:53.902367    3406 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:32:53.902371    3406 out.go:304] Setting ErrFile to fd 2...
	I0729 16:32:53.902373    3406 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:32:53.902507    3406 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:32:53.902628    3406 out.go:298] Setting JSON to false
	I0729 16:32:53.902637    3406 mustload.go:65] Loading cluster: ha-291000
	I0729 16:32:53.902704    3406 notify.go:220] Checking for updates...
	I0729 16:32:53.902858    3406 config.go:182] Loaded profile config "ha-291000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:32:53.902865    3406 status.go:255] checking status of ha-291000 ...
	I0729 16:32:53.903061    3406 status.go:330] ha-291000 host status = "Stopped" (err=<nil>)
	I0729 16:32:53.903065    3406 status.go:343] host is not running, skipping remaining checks
	I0729 16:32:53.903067    3406 status.go:257] ha-291000 status: &{Name:ha-291000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 16:32:53.903076    3406 status.go:255] checking status of ha-291000-m02 ...
	I0729 16:32:53.903167    3406 status.go:330] ha-291000-m02 host status = "Stopped" (err=<nil>)
	I0729 16:32:53.903171    3406 status.go:343] host is not running, skipping remaining checks
	I0729 16:32:53.903173    3406 status.go:257] ha-291000-m02 status: &{Name:ha-291000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 16:32:53.903177    3406 status.go:255] checking status of ha-291000-m03 ...
	I0729 16:32:53.903269    3406 status.go:330] ha-291000-m03 host status = "Stopped" (err=<nil>)
	I0729 16:32:53.903271    3406 status.go:343] host is not running, skipping remaining checks
	I0729 16:32:53.903273    3406 status.go:257] ha-291000-m03 status: &{Name:ha-291000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 16:32:53.903277    3406 status.go:255] checking status of ha-291000-m04 ...
	I0729 16:32:53.903366    3406 status.go:330] ha-291000-m04 host status = "Stopped" (err=<nil>)
	I0729 16:32:53.903370    3406 status.go:343] host is not running, skipping remaining checks
	I0729 16:32:53.903371    3406 status.go:257] ha-291000-m04 status: &{Name:ha-291000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-291000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-291000 -n ha-291000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-291000 -n ha-291000: exit status 7 (29.834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-291000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-291000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-291000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-291000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-291000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-291000 -n ha-291000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-291000 -n ha-291000: exit status 7 (29.501333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-291000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
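
The assertion above (ha_test.go:413) prints the whole escaped profile JSON, but the field it actually checks is .valid[].Status, which is "Stopped" where the test wants "Degraded". A quick way to pull just that field out of the same command, assuming jq is available on the host:

    out/minikube-darwin-arm64 profile list --output json \
      | jq -r '.valid[] | [.Name, .Status] | @tsv'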

TestMultiControlPlane/serial/StopCluster (202.07s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 stop -v=7 --alsologtostderr
E0729 16:33:41.921179    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0729 16:35:04.986449    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-291000 stop -v=7 --alsologtostderr: (3m21.969819084s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-291000 status -v=7 --alsologtostderr: exit status 7 (65.256625ms)

-- stdout --
	ha-291000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-291000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-291000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-291000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:36:16.041152    3510 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:36:16.041380    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:36:16.041384    3510 out.go:304] Setting ErrFile to fd 2...
	I0729 16:36:16.041387    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:36:16.041546    3510 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:36:16.041717    3510 out.go:298] Setting JSON to false
	I0729 16:36:16.041730    3510 mustload.go:65] Loading cluster: ha-291000
	I0729 16:36:16.041781    3510 notify.go:220] Checking for updates...
	I0729 16:36:16.042083    3510 config.go:182] Loaded profile config "ha-291000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:36:16.042100    3510 status.go:255] checking status of ha-291000 ...
	I0729 16:36:16.042396    3510 status.go:330] ha-291000 host status = "Stopped" (err=<nil>)
	I0729 16:36:16.042401    3510 status.go:343] host is not running, skipping remaining checks
	I0729 16:36:16.042404    3510 status.go:257] ha-291000 status: &{Name:ha-291000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 16:36:16.042417    3510 status.go:255] checking status of ha-291000-m02 ...
	I0729 16:36:16.042545    3510 status.go:330] ha-291000-m02 host status = "Stopped" (err=<nil>)
	I0729 16:36:16.042555    3510 status.go:343] host is not running, skipping remaining checks
	I0729 16:36:16.042558    3510 status.go:257] ha-291000-m02 status: &{Name:ha-291000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 16:36:16.042563    3510 status.go:255] checking status of ha-291000-m03 ...
	I0729 16:36:16.042687    3510 status.go:330] ha-291000-m03 host status = "Stopped" (err=<nil>)
	I0729 16:36:16.042691    3510 status.go:343] host is not running, skipping remaining checks
	I0729 16:36:16.042694    3510 status.go:257] ha-291000-m03 status: &{Name:ha-291000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 16:36:16.042701    3510 status.go:255] checking status of ha-291000-m04 ...
	I0729 16:36:16.042824    3510 status.go:330] ha-291000-m04 host status = "Stopped" (err=<nil>)
	I0729 16:36:16.042828    3510 status.go:343] host is not running, skipping remaining checks
	I0729 16:36:16.042831    3510 status.go:257] ha-291000-m04 status: &{Name:ha-291000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-291000 status -v=7 --alsologtostderr": ha-291000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-291000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-291000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-291000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-291000 status -v=7 --alsologtostderr": ha-291000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-291000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-291000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-291000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-291000 status -v=7 --alsologtostderr": ha-291000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-291000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-291000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-291000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-291000 -n ha-291000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-291000 -n ha-291000: exit status 7 (34.0625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-291000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.07s)
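
Note that the stop itself succeeded (3m21.97s) and the status output correctly shows all four nodes stopped; the count assertions at ha_test.go:543, :549, and :552 then fail, most likely because the earlier DeleteSecondaryNode step never removed m03, so the status still lists three control planes instead of the two the test expects. To replay the same per-node probe the post-mortem helper uses, a sketch (assuming -n accepts the full node names shown in the status dump):

    for n in ha-291000 ha-291000-m02 ha-291000-m03 ha-291000-m04; do
      printf '%s: ' "$n"
      out/minikube-darwin-arm64 status --format='{{.Host}}' -p ha-291000 -n "$n" || true
    done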

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-291000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-291000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.182243333s)

-- stdout --
	* [ha-291000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-291000" primary control-plane node in "ha-291000" cluster
	* Restarting existing qemu2 VM for "ha-291000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-291000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:36:16.105516    3514 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:36:16.105643    3514 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:36:16.105646    3514 out.go:304] Setting ErrFile to fd 2...
	I0729 16:36:16.105648    3514 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:36:16.105782    3514 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:36:16.106770    3514 out.go:298] Setting JSON to false
	I0729 16:36:16.122734    3514 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2139,"bootTime":1722294037,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:36:16.122799    3514 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:36:16.127748    3514 out.go:177] * [ha-291000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:36:16.135803    3514 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:36:16.135848    3514 notify.go:220] Checking for updates...
	I0729 16:36:16.142757    3514 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:36:16.145687    3514 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:36:16.148728    3514 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:36:16.151713    3514 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:36:16.154740    3514 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:36:16.158090    3514 config.go:182] Loaded profile config "ha-291000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:36:16.158374    3514 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:36:16.162665    3514 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:36:16.169750    3514 start.go:297] selected driver: qemu2
	I0729 16:36:16.169758    3514 start.go:901] validating driver "qemu2" against &{Name:ha-291000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-291000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:36:16.169836    3514 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:36:16.172084    3514 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:36:16.172127    3514 cni.go:84] Creating CNI manager for ""
	I0729 16:36:16.172132    3514 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 16:36:16.172179    3514 start.go:340] cluster config:
	{Name:ha-291000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-291000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:36:16.175923    3514 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:36:16.183654    3514 out.go:177] * Starting "ha-291000" primary control-plane node in "ha-291000" cluster
	I0729 16:36:16.187732    3514 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:36:16.187748    3514 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:36:16.187759    3514 cache.go:56] Caching tarball of preloaded images
	I0729 16:36:16.187828    3514 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:36:16.187835    3514 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:36:16.187907    3514 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/ha-291000/config.json ...
	I0729 16:36:16.188316    3514 start.go:360] acquireMachinesLock for ha-291000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:36:16.188350    3514 start.go:364] duration metric: took 28.75µs to acquireMachinesLock for "ha-291000"
	I0729 16:36:16.188360    3514 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:36:16.188366    3514 fix.go:54] fixHost starting: 
	I0729 16:36:16.188487    3514 fix.go:112] recreateIfNeeded on ha-291000: state=Stopped err=<nil>
	W0729 16:36:16.188495    3514 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:36:16.191716    3514 out.go:177] * Restarting existing qemu2 VM for "ha-291000" ...
	I0729 16:36:16.199620    3514 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:36:16.199657    3514 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:4e:f8:27:01:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000/disk.qcow2
	I0729 16:36:16.201568    3514 main.go:141] libmachine: STDOUT: 
	I0729 16:36:16.201585    3514 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:36:16.201613    3514 fix.go:56] duration metric: took 13.247542ms for fixHost
	I0729 16:36:16.201617    3514 start.go:83] releasing machines lock for "ha-291000", held for 13.26225ms
	W0729 16:36:16.201623    3514 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:36:16.201653    3514 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:36:16.201661    3514 start.go:729] Will try again in 5 seconds ...
	I0729 16:36:21.203316    3514 start.go:360] acquireMachinesLock for ha-291000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:36:21.203782    3514 start.go:364] duration metric: took 317.833µs to acquireMachinesLock for "ha-291000"
	I0729 16:36:21.204012    3514 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:36:21.204035    3514 fix.go:54] fixHost starting: 
	I0729 16:36:21.204817    3514 fix.go:112] recreateIfNeeded on ha-291000: state=Stopped err=<nil>
	W0729 16:36:21.204842    3514 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:36:21.213275    3514 out.go:177] * Restarting existing qemu2 VM for "ha-291000" ...
	I0729 16:36:21.217316    3514 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:36:21.217680    3514 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:4e:f8:27:01:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000/disk.qcow2
	I0729 16:36:21.226731    3514 main.go:141] libmachine: STDOUT: 
	I0729 16:36:21.226784    3514 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:36:21.226856    3514 fix.go:56] duration metric: took 22.824208ms for fixHost
	I0729 16:36:21.226872    3514 start.go:83] releasing machines lock for "ha-291000", held for 23.065ms
	W0729 16:36:21.227064    3514 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-291000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-291000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:36:21.234251    3514 out.go:177] 
	W0729 16:36:21.238350    3514 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:36:21.238372    3514 out.go:239] * 
	* 
	W0729 16:36:21.240825    3514 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:36:21.249160    3514 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-291000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-291000 -n ha-291000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-291000 -n ha-291000: exit status 7 (64.894209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-291000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
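
This is the same socket_vmnet failure as in RestartClusterKeepsNodes: both attempts (16:36:16 and 16:36:21) get "Connection refused" and minikube exits with GUEST_PROVISION. Since the qemu invocation passes -pidfile, one more thing worth checking is whether qemu ever forked at all; a sketch, with the pidfile path copied from the log above (adjust for a different MINIKUBE_HOME):

    PIDFILE=/Users/jenkins/minikube-integration/19347-923/.minikube/machines/ha-291000/qemu.pid
    if [ -f "$PIDFILE" ] && ps -p "$(cat "$PIDFILE")" > /dev/null; then
      echo "qemu is still running for ha-291000"
    else
      echo "qemu never started, consistent with the connection-refused errors above"
    fi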

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-291000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-291000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-291000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-291000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-291000 -n ha-291000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-291000 -n ha-291000: exit status 7 (29.584583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-291000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)
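
Note: the Degraded/Stopped mismatch above is checked against the .valid[].Status field of the profile-list payload. A minimal sketch for pulling that field out by hand, assuming jq is available on the host (the path expressions follow the JSON shown above):

	out/minikube-darwin-arm64 profile list --output json \
	  | jq -r '.valid[] | "\(.Name)\t\(.Status)"'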

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-291000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-291000 --control-plane -v=7 --alsologtostderr: exit status 83 (40.871ms)

-- stdout --
	* The control-plane node ha-291000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-291000"

-- /stdout --
** stderr ** 
	I0729 16:36:21.436887    3531 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:36:21.437051    3531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:36:21.437054    3531 out.go:304] Setting ErrFile to fd 2...
	I0729 16:36:21.437057    3531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:36:21.437272    3531 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:36:21.437515    3531 mustload.go:65] Loading cluster: ha-291000
	I0729 16:36:21.437737    3531 config.go:182] Loaded profile config "ha-291000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0729 16:36:21.438049    3531 out.go:239] ! The control-plane node ha-291000 host is not running (will try others): state=Stopped
	! The control-plane node ha-291000 host is not running (will try others): state=Stopped
	W0729 16:36:21.438148    3531 out.go:239] ! The control-plane node ha-291000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-291000-m02 host is not running (will try others): state=Stopped
	I0729 16:36:21.441998    3531 out.go:177] * The control-plane node ha-291000-m03 host is not running: state=Stopped
	I0729 16:36:21.445968    3531 out.go:177]   To start a cluster, run: "minikube start -p ha-291000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-291000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-291000 -n ha-291000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-291000 -n ha-291000: exit status 7 (29.990167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-291000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)
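
Note: per the stderr above, node add walks the existing control-plane hosts in order (ha-291000, then -m02, then -m03) and exits 83 once all of them report state=Stopped. To see the node inventory it iterates over, minikube's node list subcommand reads the profile config, so it should work even while the hosts are down (a sketch; exact output format may vary by version):

	out/minikube-darwin-arm64 node list -p ha-291000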

TestImageBuild/serial/Setup (10.08s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-125000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-125000 --driver=qemu2 : exit status 80 (10.006578625s)

-- stdout --
	* [image-125000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-125000" primary control-plane node in "image-125000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-125000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-125000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-125000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-125000 -n image-125000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-125000 -n image-125000: exit status 7 (69.123125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-125000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.08s)
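
Note: every start failure in this run bottoms out in the same root cause: Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning the qemu2 driver could not reach the socket_vmnet daemon on the build host. A minimal triage sketch, assuming socket_vmnet is installed under /opt/socket_vmnet as the logs show; the launch command follows the socket_vmnet README and is an assumption about this host's setup:

	# Is the socket present, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If not, (re)start the daemon; root is required to open the vmnet interface.
	sudo /opt/socket_vmnet/bin/socket_vmnet \
	    --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &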

TestJSONOutput/start/Command (9.93s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-254000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-254000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.927690167s)

-- stdout --
	{"specversion":"1.0","id":"1aaf1c6d-4c7c-40c9-89d5-8b841ddc59cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-254000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ca22a77a-680d-40b2-b8c8-d192475ff4bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19347"}}
	{"specversion":"1.0","id":"e75db731-b7de-42d0-8d72-490eddc83e1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig"}}
	{"specversion":"1.0","id":"79cb1910-1d86-478b-9d20-5a8c90044b2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"8e7aae05-fd67-4047-933b-f52232d14eff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ee42b433-43aa-4e1a-b347-52d68fef1981","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube"}}
	{"specversion":"1.0","id":"8e830d2f-58c3-440f-8625-80b637fbaacf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1b33d378-3eac-4e5a-8f2a-e28834f3023e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fa348108-4197-4813-9764-5fcc8970e206","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"69a0dcd0-86c7-4766-be6f-cc51bde5757e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-254000\" primary control-plane node in \"json-output-254000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1027c644-d816-4704-8d89-477b0732d017","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"bdd16ccf-43e1-4335-b1ff-19dabe5761bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-254000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"405e8c88-0ecc-4449-8516-5aba1b382f7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"63193695-9b4c-4e38-bac1-e6ffcebec313","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"5b06ba28-70cd-42bc-bbe7-a9064ee387e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-254000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"12d83c75-2d58-4c51-bc3b-30bf04816e19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"fd7178d1-488f-4f20-a776-b696be387f9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-254000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.93s)
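
Note: the marshal failure above is a knock-on effect rather than a separate JSON bug: the raw "OUTPUT:" and "ERROR:" lines from QEMU are interleaved with the CloudEvents stream on stdout, and the test unmarshals stdout line by line, so the first non-JSON line (the 'O' in "OUTPUT:") aborts the conversion. A sketch that separates the two kinds of lines by hand, assuming jq is installed (flags copied from the failing invocation above):

	out/minikube-darwin-arm64 start -p json-output-254000 --output=json \
	    --user=testUser --memory=2200 --wait=true --driver=qemu2 \
	  | while IFS= read -r line; do
	      [ -n "$line" ] || continue
	      printf '%s\n' "$line" | jq -e . >/dev/null 2>&1 || echo "not JSON: $line"
	    done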

TestJSONOutput/pause/Command (0.07s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-254000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-254000 --output=json --user=testUser: exit status 83 (74.125709ms)

-- stdout --
	{"specversion":"1.0","id":"ab746747-c1d1-46c9-9fb2-3ceb51d3c4ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-254000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"8930a9ac-3046-45fb-905f-39f2f4eadc6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-254000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-254000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.07s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-254000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-254000 --output=json --user=testUser: exit status 83 (43.328791ms)

-- stdout --
	* The control-plane node json-output-254000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-254000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-254000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-254000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.2s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-034000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-034000 --driver=qemu2 : exit status 80 (9.912770042s)

-- stdout --
	* [first-034000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-034000" primary control-plane node in "first-034000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-034000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-034000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-034000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-29 16:36:55.729263 -0700 PDT m=+2045.831623251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-036000 -n second-036000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-036000 -n second-036000: exit status 85 (83.994875ms)

-- stdout --
	* Profile "second-036000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-036000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-036000" host is not running, skipping log retrieval (state="* Profile \"second-036000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-036000\"")
helpers_test.go:175: Cleaning up "second-036000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-036000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-29 16:36:55.917095 -0700 PDT m=+2046.019458168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-034000 -n first-034000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-034000 -n first-034000: exit status 7 (28.818917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-034000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-034000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-034000
--- FAIL: TestMinikubeProfile (10.20s)
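
Note: the test fails its pre-condition and panics twice, leaving both profiles to be cleaned up one delete at a time. For clearing out everything between runs, minikube's documented delete --all is a one-shot alternative (--purge additionally removes the shared .minikube directory, which is an aggressive assumption for a CI host); a sketch:

	out/minikube-darwin-arm64 delete --all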

TestMountStart/serial/StartWithMountFirst (10.03s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-340000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
E0729 16:37:01.319197    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-340000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.95758575s)

-- stdout --
	* [mount-start-1-340000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-340000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-340000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-340000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-340000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-340000 -n mount-start-1-340000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-340000 -n mount-start-1-340000: exit status 7 (67.724625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-340000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.03s)

TestMultiNode/serial/FreshStart2Nodes (10.16s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-100000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-100000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (10.090590125s)

-- stdout --
	* [multinode-100000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-100000" primary control-plane node in "multinode-100000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-100000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:37:06.251986    3680 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:37:06.252111    3680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:37:06.252114    3680 out.go:304] Setting ErrFile to fd 2...
	I0729 16:37:06.252116    3680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:37:06.252232    3680 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:37:06.253288    3680 out.go:298] Setting JSON to false
	I0729 16:37:06.269207    3680 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2189,"bootTime":1722294037,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:37:06.269267    3680 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:37:06.276330    3680 out.go:177] * [multinode-100000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:37:06.284218    3680 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:37:06.284274    3680 notify.go:220] Checking for updates...
	I0729 16:37:06.291242    3680 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:37:06.294195    3680 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:37:06.297162    3680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:37:06.300264    3680 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:37:06.303221    3680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:37:06.306425    3680 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:37:06.310156    3680 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:37:06.317234    3680 start.go:297] selected driver: qemu2
	I0729 16:37:06.317242    3680 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:37:06.317249    3680 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:37:06.319482    3680 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:37:06.322149    3680 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:37:06.323526    3680 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:37:06.323544    3680 cni.go:84] Creating CNI manager for ""
	I0729 16:37:06.323549    3680 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 16:37:06.323563    3680 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 16:37:06.323597    3680 start.go:340] cluster config:
	{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:37:06.327258    3680 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:37:06.335191    3680 out.go:177] * Starting "multinode-100000" primary control-plane node in "multinode-100000" cluster
	I0729 16:37:06.339156    3680 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:37:06.339178    3680 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:37:06.339191    3680 cache.go:56] Caching tarball of preloaded images
	I0729 16:37:06.339257    3680 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:37:06.339264    3680 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:37:06.339497    3680 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/multinode-100000/config.json ...
	I0729 16:37:06.339511    3680 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/multinode-100000/config.json: {Name:mkaf64bd94a4ebf1a1406cdd8b4b6829711dd2e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:37:06.339734    3680 start.go:360] acquireMachinesLock for multinode-100000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:37:06.339773    3680 start.go:364] duration metric: took 33.5µs to acquireMachinesLock for "multinode-100000"
	I0729 16:37:06.339786    3680 start.go:93] Provisioning new machine with config: &{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:37:06.339819    3680 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:37:06.348116    3680 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:37:06.366113    3680 start.go:159] libmachine.API.Create for "multinode-100000" (driver="qemu2")
	I0729 16:37:06.366141    3680 client.go:168] LocalClient.Create starting
	I0729 16:37:06.366206    3680 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:37:06.366240    3680 main.go:141] libmachine: Decoding PEM data...
	I0729 16:37:06.366256    3680 main.go:141] libmachine: Parsing certificate...
	I0729 16:37:06.366294    3680 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:37:06.366318    3680 main.go:141] libmachine: Decoding PEM data...
	I0729 16:37:06.366328    3680 main.go:141] libmachine: Parsing certificate...
	I0729 16:37:06.366683    3680 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:37:06.519624    3680 main.go:141] libmachine: Creating SSH key...
	I0729 16:37:06.695931    3680 main.go:141] libmachine: Creating Disk image...
	I0729 16:37:06.695937    3680 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:37:06.696138    3680 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/disk.qcow2
	I0729 16:37:06.705495    3680 main.go:141] libmachine: STDOUT: 
	I0729 16:37:06.705515    3680 main.go:141] libmachine: STDERR: 
	I0729 16:37:06.705568    3680 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/disk.qcow2 +20000M
	I0729 16:37:06.713368    3680 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:37:06.713382    3680 main.go:141] libmachine: STDERR: 
	I0729 16:37:06.713395    3680 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/disk.qcow2
	I0729 16:37:06.713399    3680 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:37:06.713415    3680 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:37:06.713439    3680 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:2b:16:48:7c:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/disk.qcow2
	I0729 16:37:06.715097    3680 main.go:141] libmachine: STDOUT: 
	I0729 16:37:06.715113    3680 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:37:06.715130    3680 client.go:171] duration metric: took 348.990667ms to LocalClient.Create
	I0729 16:37:08.717382    3680 start.go:128] duration metric: took 2.377546833s to createHost
	I0729 16:37:08.717465    3680 start.go:83] releasing machines lock for "multinode-100000", held for 2.377715709s
	W0729 16:37:08.717513    3680 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:37:08.726654    3680 out.go:177] * Deleting "multinode-100000" in qemu2 ...
	W0729 16:37:08.758998    3680 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:37:08.759020    3680 start.go:729] Will try again in 5 seconds ...
	I0729 16:37:13.761098    3680 start.go:360] acquireMachinesLock for multinode-100000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:37:13.761547    3680 start.go:364] duration metric: took 362.459µs to acquireMachinesLock for "multinode-100000"
	I0729 16:37:13.761670    3680 start.go:93] Provisioning new machine with config: &{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:37:13.762091    3680 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:37:13.772453    3680 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:37:13.822857    3680 start.go:159] libmachine.API.Create for "multinode-100000" (driver="qemu2")
	I0729 16:37:13.822911    3680 client.go:168] LocalClient.Create starting
	I0729 16:37:13.823027    3680 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:37:13.823101    3680 main.go:141] libmachine: Decoding PEM data...
	I0729 16:37:13.823124    3680 main.go:141] libmachine: Parsing certificate...
	I0729 16:37:13.823193    3680 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:37:13.823239    3680 main.go:141] libmachine: Decoding PEM data...
	I0729 16:37:13.823252    3680 main.go:141] libmachine: Parsing certificate...
	I0729 16:37:13.823785    3680 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:37:13.984436    3680 main.go:141] libmachine: Creating SSH key...
	I0729 16:37:14.247686    3680 main.go:141] libmachine: Creating Disk image...
	I0729 16:37:14.247699    3680 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:37:14.247923    3680 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/disk.qcow2
	I0729 16:37:14.257228    3680 main.go:141] libmachine: STDOUT: 
	I0729 16:37:14.257254    3680 main.go:141] libmachine: STDERR: 
	I0729 16:37:14.257324    3680 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/disk.qcow2 +20000M
	I0729 16:37:14.265383    3680 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:37:14.265404    3680 main.go:141] libmachine: STDERR: 
	I0729 16:37:14.265421    3680 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/disk.qcow2
	I0729 16:37:14.265425    3680 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:37:14.265437    3680 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:37:14.265474    3680 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:cd:0a:ec:af:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/disk.qcow2
	I0729 16:37:14.267093    3680 main.go:141] libmachine: STDOUT: 
	I0729 16:37:14.267108    3680 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:37:14.267122    3680 client.go:171] duration metric: took 444.212417ms to LocalClient.Create
	I0729 16:37:16.269353    3680 start.go:128] duration metric: took 2.507204625s to createHost
	I0729 16:37:16.269438    3680 start.go:83] releasing machines lock for "multinode-100000", held for 2.507901959s
	W0729 16:37:16.269817    3680 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-100000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-100000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:37:16.278464    3680 out.go:177] 
	W0729 16:37:16.286497    3680 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:37:16.286525    3680 out.go:239] * 
	* 
	W0729 16:37:16.289355    3680 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:37:16.301378    3680 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-100000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000: exit status 7 (66.036458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-100000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.16s)
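The root cause of this failure (and of every TestMultiNode failure that follows) is the GUEST_PROVISION error above: the qemu2 driver could not reach /var/run/socket_vmnet, so the VM, and with it the cluster, never came up. Below is a minimal Go sketch, not part of the test suite, that probes the same socket path taken from the error message; a "connection refused" here points at a socket_vmnet daemon that is not running on the build host.

// probe_socket_vmnet.go: a minimal sketch (not part of the test suite) that
// checks whether anything is listening on the socket_vmnet unix socket.
// The path comes from the GUEST_PROVISION error above; adjust if your install differs.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path quoted in the failure message

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here matches the minikube failure: nothing is
		// listening on this path (daemon stopped or installed elsewhere).
		fmt.Printf("socket_vmnet unreachable: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}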

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (108.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-100000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-100000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (125.3675ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-100000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-100000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-100000 -- rollout status deployment/busybox: exit status 1 (56.490042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-100000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.188667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-100000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.546ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-100000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.844792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-100000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.511167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-100000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.117083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-100000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.74075ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-100000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.704333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-100000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.720667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-100000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.773584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-100000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.991542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-100000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0729 16:38:41.917019    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.6895ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-100000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.0915ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-100000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-100000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-100000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.447875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-100000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-100000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-100000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.022416ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-100000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-100000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-100000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.578541ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-100000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000: exit status 7 (28.891541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-100000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (108.01s)
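The repeated "failed to retrieve Pod IPs (may be temporary)" lines come from a poll loop that keeps re-running the kubectl query until a deadline expires. A sketch of that pattern follows, under stated assumptions: the helper name, interval, and deadline are illustrative, not the exact multinode_test.go code.

// A sketch of the poll-until-deadline pattern behind the repeated
// "failed to retrieve Pod IPs (may be temporary)" lines above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// podIPs runs the same kubectl query the test uses and returns its output.
func podIPs(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
		"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	return string(out), err
}

func main() {
	deadline := time.Now().Add(90 * time.Second) // illustrative deadline
	for time.Now().Before(deadline) {
		if ips, err := podIPs("multinode-100000"); err == nil {
			fmt.Println("pod IPs:", ips)
			return
		}
		time.Sleep(5 * time.Second) // each failed iteration logs "may be temporary"
	}
	fmt.Println("gave up: no pod IPs before the deadline")
}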

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.35175ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-100000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000: exit status 7 (29.335834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-100000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-100000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-100000 -v 3 --alsologtostderr: exit status 83 (39.037916ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-100000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-100000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:39:04.504603    3791 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:39:04.504779    3791 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:04.504783    3791 out.go:304] Setting ErrFile to fd 2...
	I0729 16:39:04.504785    3791 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:04.504918    3791 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:39:04.505181    3791 mustload.go:65] Loading cluster: multinode-100000
	I0729 16:39:04.505369    3791 config.go:182] Loaded profile config "multinode-100000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:39:04.509284    3791 out.go:177] * The control-plane node multinode-100000 host is not running: state=Stopped
	I0729 16:39:04.512282    3791 out.go:177]   To start a cluster, run: "minikube start -p multinode-100000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-100000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000: exit status 7 (29.356ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-100000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-100000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-100000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.066125ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-100000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-100000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-100000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000: exit status 7 (29.253125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-100000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-100000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-100000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-100000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-100000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000: exit status 7 (28.895875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-100000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.07s)
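The assertion decodes the JSON shown above and counts the entries in Config.Nodes (3 expected, 1 found, since the second node was never created). A sketch of that check follows, decoding only the fields it needs; the Go struct itself is illustrative, though the field names match the payload above.

// A sketch of the node-count check against `profile list --output json`.
// Field names match the JSON in the failure above; the struct is illustrative.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		// The failure above: 3 nodes expected, but Config.Nodes held only 1.
		fmt.Printf("%s: %d node(s) in config\n", p.Name, len(p.Config.Nodes))
	}
}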

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-100000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-100000 status --output json --alsologtostderr: exit status 7 (29.204292ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-100000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:39:04.704734    3803 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:39:04.704911    3803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:04.704915    3803 out.go:304] Setting ErrFile to fd 2...
	I0729 16:39:04.704917    3803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:04.705047    3803 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:39:04.705178    3803 out.go:298] Setting JSON to true
	I0729 16:39:04.705188    3803 mustload.go:65] Loading cluster: multinode-100000
	I0729 16:39:04.705255    3803 notify.go:220] Checking for updates...
	I0729 16:39:04.705415    3803 config.go:182] Loaded profile config "multinode-100000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:39:04.705422    3803 status.go:255] checking status of multinode-100000 ...
	I0729 16:39:04.705632    3803 status.go:330] multinode-100000 host status = "Stopped" (err=<nil>)
	I0729 16:39:04.705636    3803 status.go:343] host is not running, skipping remaining checks
	I0729 16:39:04.705638    3803 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-100000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000: exit status 7 (29.595083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-100000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
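The decode error here is a shape mismatch: with a single node, status --output json prints one object (see the stdout above), while the test unmarshals into a slice ([]cmd.Status). Below is a tolerant decode that accepts either shape, offered as a sketch rather than the test's actual code; the Status fields mirror the JSON printed above.

// A sketch of a decode that tolerates both shapes of `status --output json`:
// a single object (one node, as above) or an array (multiple nodes).
package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func decodeStatuses(data []byte) ([]Status, error) {
	var many []Status
	if err := json.Unmarshal(data, &many); err == nil {
		return many, nil // multi-node: already an array
	}
	var one Status
	if err := json.Unmarshal(data, &one); err != nil {
		return nil, err
	}
	return []Status{one}, nil // single node: wrap the lone object
}

func main() {
	// The exact stdout captured in the failure above.
	raw := []byte(`{"Name":"multinode-100000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	sts, err := decodeStatuses(raw)
	fmt.Println(sts, err)
}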

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-100000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-100000 node stop m03: exit status 85 (46.293084ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-100000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-100000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-100000 status: exit status 7 (29.818375ms)

                                                
                                                
-- stdout --
	multinode-100000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-100000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-100000 status --alsologtostderr: exit status 7 (29.072ms)

                                                
                                                
-- stdout --
	multinode-100000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:39:04.840397    3811 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:39:04.840534    3811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:04.840538    3811 out.go:304] Setting ErrFile to fd 2...
	I0729 16:39:04.840543    3811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:04.840673    3811 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:39:04.840809    3811 out.go:298] Setting JSON to false
	I0729 16:39:04.840819    3811 mustload.go:65] Loading cluster: multinode-100000
	I0729 16:39:04.840881    3811 notify.go:220] Checking for updates...
	I0729 16:39:04.841014    3811 config.go:182] Loaded profile config "multinode-100000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:39:04.841022    3811 status.go:255] checking status of multinode-100000 ...
	I0729 16:39:04.841226    3811 status.go:330] multinode-100000 host status = "Stopped" (err=<nil>)
	I0729 16:39:04.841230    3811 status.go:343] host is not running, skipping remaining checks
	I0729 16:39:04.841232    3811 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-100000 status --alsologtostderr": multinode-100000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000: exit status 7 (29.443959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-100000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
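node stop m03 exits with GUEST_NODE_RETRIEVE because m03 was never created; the profile holds only the stopped control-plane node. A sketch that guards the operation by listing the profile's nodes first follows; it assumes this minikube build supports the node list subcommand, and the names are illustrative.

// A sketch that checks which nodes the profile actually has before acting on
// one, avoiding the GUEST_NODE_RETRIEVE exit above. Assumes `node list` is
// available in this minikube build.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "node", "list", "-p", "multinode-100000").Output()
	if err != nil {
		fmt.Println("node list failed:", err)
		return
	}
	if !strings.Contains(string(out), "m03") {
		fmt.Println("profile has no m03 node; skipping node stop")
		return
	}
	if err := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-100000", "node", "stop", "m03").Run(); err != nil {
		fmt.Println("node stop failed:", err)
	}
}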

                                                
                                    
TestMultiNode/serial/StartAfterStop (53.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-100000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-100000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.279625ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:39:04.899690    3815 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:39:04.899924    3815 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:04.899927    3815 out.go:304] Setting ErrFile to fd 2...
	I0729 16:39:04.899930    3815 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:04.900066    3815 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:39:04.900293    3815 mustload.go:65] Loading cluster: multinode-100000
	I0729 16:39:04.900485    3815 config.go:182] Loaded profile config "multinode-100000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:39:04.905260    3815 out.go:177] 
	W0729 16:39:04.908327    3815 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0729 16:39:04.908332    3815 out.go:239] * 
	* 
	W0729 16:39:04.909912    3815 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:39:04.913339    3815 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0729 16:39:04.899690    3815 out.go:291] Setting OutFile to fd 1 ...
I0729 16:39:04.899924    3815 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:39:04.899927    3815 out.go:304] Setting ErrFile to fd 2...
I0729 16:39:04.899930    3815 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:39:04.900066    3815 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
I0729 16:39:04.900293    3815 mustload.go:65] Loading cluster: multinode-100000
I0729 16:39:04.900485    3815 config.go:182] Loaded profile config "multinode-100000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 16:39:04.905260    3815 out.go:177] 
W0729 16:39:04.908327    3815 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0729 16:39:04.908332    3815 out.go:239] * 
* 
W0729 16:39:04.909912    3815 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 16:39:04.913339    3815 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-100000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-100000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-100000 status -v=7 --alsologtostderr: exit status 7 (28.722459ms)

                                                
                                                
-- stdout --
	multinode-100000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:39:04.945304    3817 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:39:04.945456    3817 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:04.945465    3817 out.go:304] Setting ErrFile to fd 2...
	I0729 16:39:04.945467    3817 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:04.945595    3817 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:39:04.945711    3817 out.go:298] Setting JSON to false
	I0729 16:39:04.945720    3817 mustload.go:65] Loading cluster: multinode-100000
	I0729 16:39:04.945776    3817 notify.go:220] Checking for updates...
	I0729 16:39:04.945909    3817 config.go:182] Loaded profile config "multinode-100000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:39:04.945916    3817 status.go:255] checking status of multinode-100000 ...
	I0729 16:39:04.946116    3817 status.go:330] multinode-100000 host status = "Stopped" (err=<nil>)
	I0729 16:39:04.946119    3817 status.go:343] host is not running, skipping remaining checks
	I0729 16:39:04.946121    3817 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-100000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-100000 status -v=7 --alsologtostderr: exit status 7 (71.920166ms)

                                                
                                                
-- stdout --
	multinode-100000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:39:06.212862    3819 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:39:06.213066    3819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:06.213071    3819 out.go:304] Setting ErrFile to fd 2...
	I0729 16:39:06.213074    3819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:06.213241    3819 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:39:06.213394    3819 out.go:298] Setting JSON to false
	I0729 16:39:06.213414    3819 mustload.go:65] Loading cluster: multinode-100000
	I0729 16:39:06.213455    3819 notify.go:220] Checking for updates...
	I0729 16:39:06.213679    3819 config.go:182] Loaded profile config "multinode-100000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:39:06.213688    3819 status.go:255] checking status of multinode-100000 ...
	I0729 16:39:06.213979    3819 status.go:330] multinode-100000 host status = "Stopped" (err=<nil>)
	I0729 16:39:06.213984    3819 status.go:343] host is not running, skipping remaining checks
	I0729 16:39:06.213987    3819 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-100000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-100000 status -v=7 --alsologtostderr: exit status 7 (74.217458ms)

                                                
                                                
-- stdout --
	multinode-100000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:39:07.108114    3821 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:39:07.108321    3821 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:07.108326    3821 out.go:304] Setting ErrFile to fd 2...
	I0729 16:39:07.108329    3821 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:07.108512    3821 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:39:07.108684    3821 out.go:298] Setting JSON to false
	I0729 16:39:07.108697    3821 mustload.go:65] Loading cluster: multinode-100000
	I0729 16:39:07.108729    3821 notify.go:220] Checking for updates...
	I0729 16:39:07.108951    3821 config.go:182] Loaded profile config "multinode-100000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:39:07.108960    3821 status.go:255] checking status of multinode-100000 ...
	I0729 16:39:07.109224    3821 status.go:330] multinode-100000 host status = "Stopped" (err=<nil>)
	I0729 16:39:07.109229    3821 status.go:343] host is not running, skipping remaining checks
	I0729 16:39:07.109232    3821 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-100000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-100000 status -v=7 --alsologtostderr: exit status 7 (70.856542ms)

                                                
                                                
-- stdout --
	multinode-100000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:39:08.545870    3823 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:39:08.546075    3823 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:08.546079    3823 out.go:304] Setting ErrFile to fd 2...
	I0729 16:39:08.546083    3823 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:08.546246    3823 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:39:08.546400    3823 out.go:298] Setting JSON to false
	I0729 16:39:08.546411    3823 mustload.go:65] Loading cluster: multinode-100000
	I0729 16:39:08.546453    3823 notify.go:220] Checking for updates...
	I0729 16:39:08.546652    3823 config.go:182] Loaded profile config "multinode-100000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:39:08.546662    3823 status.go:255] checking status of multinode-100000 ...
	I0729 16:39:08.546942    3823 status.go:330] multinode-100000 host status = "Stopped" (err=<nil>)
	I0729 16:39:08.546947    3823 status.go:343] host is not running, skipping remaining checks
	I0729 16:39:08.546950    3823 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-100000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-100000 status -v=7 --alsologtostderr: exit status 7 (73.823417ms)

                                                
                                                
-- stdout --
	multinode-100000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:39:13.563612    3825 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:39:13.563792    3825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:13.563796    3825 out.go:304] Setting ErrFile to fd 2...
	I0729 16:39:13.563799    3825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:13.563971    3825 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:39:13.564126    3825 out.go:298] Setting JSON to false
	I0729 16:39:13.564139    3825 mustload.go:65] Loading cluster: multinode-100000
	I0729 16:39:13.564198    3825 notify.go:220] Checking for updates...
	I0729 16:39:13.564411    3825 config.go:182] Loaded profile config "multinode-100000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:39:13.564420    3825 status.go:255] checking status of multinode-100000 ...
	I0729 16:39:13.564697    3825 status.go:330] multinode-100000 host status = "Stopped" (err=<nil>)
	I0729 16:39:13.564702    3825 status.go:343] host is not running, skipping remaining checks
	I0729 16:39:13.564705    3825 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-100000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-100000 status -v=7 --alsologtostderr: exit status 7 (72.656959ms)

                                                
                                                
-- stdout --
	multinode-100000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:39:21.128102    3829 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:39:21.128319    3829 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:21.128324    3829 out.go:304] Setting ErrFile to fd 2...
	I0729 16:39:21.128327    3829 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:21.128519    3829 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:39:21.128662    3829 out.go:298] Setting JSON to false
	I0729 16:39:21.128675    3829 mustload.go:65] Loading cluster: multinode-100000
	I0729 16:39:21.128713    3829 notify.go:220] Checking for updates...
	I0729 16:39:21.128915    3829 config.go:182] Loaded profile config "multinode-100000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:39:21.128923    3829 status.go:255] checking status of multinode-100000 ...
	I0729 16:39:21.129201    3829 status.go:330] multinode-100000 host status = "Stopped" (err=<nil>)
	I0729 16:39:21.129206    3829 status.go:343] host is not running, skipping remaining checks
	I0729 16:39:21.129209    3829 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-100000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-100000 status -v=7 --alsologtostderr: exit status 7 (72.867917ms)

                                                
                                                
-- stdout --
	multinode-100000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:39:28.563627    3831 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:39:28.563870    3831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:28.563875    3831 out.go:304] Setting ErrFile to fd 2...
	I0729 16:39:28.563879    3831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:28.564084    3831 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:39:28.564259    3831 out.go:298] Setting JSON to false
	I0729 16:39:28.564271    3831 mustload.go:65] Loading cluster: multinode-100000
	I0729 16:39:28.564315    3831 notify.go:220] Checking for updates...
	I0729 16:39:28.564571    3831 config.go:182] Loaded profile config "multinode-100000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:39:28.564584    3831 status.go:255] checking status of multinode-100000 ...
	I0729 16:39:28.564909    3831 status.go:330] multinode-100000 host status = "Stopped" (err=<nil>)
	I0729 16:39:28.564915    3831 status.go:343] host is not running, skipping remaining checks
	I0729 16:39:28.564918    3831 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-100000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-100000 status -v=7 --alsologtostderr: exit status 7 (71.141375ms)

                                                
                                                
-- stdout --
	multinode-100000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:39:36.762111    3838 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:39:36.762312    3838 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:36.762317    3838 out.go:304] Setting ErrFile to fd 2...
	I0729 16:39:36.762320    3838 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:36.762491    3838 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:39:36.762644    3838 out.go:298] Setting JSON to false
	I0729 16:39:36.762658    3838 mustload.go:65] Loading cluster: multinode-100000
	I0729 16:39:36.762692    3838 notify.go:220] Checking for updates...
	I0729 16:39:36.762936    3838 config.go:182] Loaded profile config "multinode-100000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:39:36.762944    3838 status.go:255] checking status of multinode-100000 ...
	I0729 16:39:36.763216    3838 status.go:330] multinode-100000 host status = "Stopped" (err=<nil>)
	I0729 16:39:36.763221    3838 status.go:343] host is not running, skipping remaining checks
	I0729 16:39:36.763224    3838 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-100000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-100000 status -v=7 --alsologtostderr: exit status 7 (71.711375ms)

                                                
                                                
-- stdout --
	multinode-100000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:39:58.662928    3850 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:39:58.663132    3850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:58.663137    3850 out.go:304] Setting ErrFile to fd 2...
	I0729 16:39:58.663140    3850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:39:58.663311    3850 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:39:58.663461    3850 out.go:298] Setting JSON to false
	I0729 16:39:58.663476    3850 mustload.go:65] Loading cluster: multinode-100000
	I0729 16:39:58.663519    3850 notify.go:220] Checking for updates...
	I0729 16:39:58.663737    3850 config.go:182] Loaded profile config "multinode-100000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:39:58.663747    3850 status.go:255] checking status of multinode-100000 ...
	I0729 16:39:58.664034    3850 status.go:330] multinode-100000 host status = "Stopped" (err=<nil>)
	I0729 16:39:58.664039    3850 status.go:343] host is not running, skipping remaining checks
	I0729 16:39:58.664042    3850 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-100000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000: exit status 7 (32.816833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-100000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (53.83s)
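Every `status` invocation above exits with code 7 rather than failing outright. minikube appears to compose this exit code from per-component bit flags (host, kubelet, apiserver), which is consistent with the all-Stopped output; the exact flag layout below is an assumption to verify against minikube's cmd/minikube/cmd/status.go, not something shown in this log. A minimal Go sketch of running the same command and decoding the code:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Assumed bit layout for minikube's status exit code (verify against
	// cmd/minikube/cmd/status.go); 1|2|4 = 7 matches every run above.
	const (
		hostStopped    = 1 << 0
		kubeletStopped = 1 << 1
		apiStopped     = 1 << 2
	)

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-100000", "status").CombinedOutput()
		code := 0
		if ee, ok := err.(*exec.ExitError); ok {
			code = ee.ExitCode()
		} else if err != nil {
			panic(err) // the binary could not be started at all
		}
		fmt.Printf("%s\nexit code: %d\n", out, code)
		if code == hostStopped|kubeletStopped|apiStopped {
			fmt.Println("host, kubelet and apiserver all report Stopped")
		}
	}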

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-100000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-100000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-100000: (2.898961542s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-100000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-100000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.220042791s)

                                                
                                                
-- stdout --
	* [multinode-100000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-100000" primary control-plane node in "multinode-100000" cluster
	* Restarting existing qemu2 VM for "multinode-100000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-100000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:40:01.689204    3874 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:40:01.689377    3874 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:40:01.689381    3874 out.go:304] Setting ErrFile to fd 2...
	I0729 16:40:01.689384    3874 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:40:01.689542    3874 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:40:01.690748    3874 out.go:298] Setting JSON to false
	I0729 16:40:01.709884    3874 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2364,"bootTime":1722294037,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:40:01.709961    3874 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:40:01.714920    3874 out.go:177] * [multinode-100000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:40:01.721769    3874 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:40:01.721810    3874 notify.go:220] Checking for updates...
	I0729 16:40:01.728771    3874 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:40:01.731720    3874 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:40:01.734728    3874 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:40:01.737655    3874 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:40:01.740730    3874 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:40:01.744033    3874 config.go:182] Loaded profile config "multinode-100000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:40:01.744104    3874 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:40:01.748694    3874 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:40:01.755684    3874 start.go:297] selected driver: qemu2
	I0729 16:40:01.755691    3874 start.go:901] validating driver "qemu2" against &{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:40:01.755741    3874 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:40:01.758094    3874 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:40:01.758142    3874 cni.go:84] Creating CNI manager for ""
	I0729 16:40:01.758148    3874 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 16:40:01.758196    3874 start.go:340] cluster config:
	{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:40:01.761955    3874 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:40:01.769706    3874 out.go:177] * Starting "multinode-100000" primary control-plane node in "multinode-100000" cluster
	I0729 16:40:01.773505    3874 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:40:01.773521    3874 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:40:01.773529    3874 cache.go:56] Caching tarball of preloaded images
	I0729 16:40:01.773592    3874 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:40:01.773598    3874 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:40:01.773649    3874 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/multinode-100000/config.json ...
	I0729 16:40:01.774085    3874 start.go:360] acquireMachinesLock for multinode-100000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:40:01.774123    3874 start.go:364] duration metric: took 31.834µs to acquireMachinesLock for "multinode-100000"
	I0729 16:40:01.774134    3874 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:40:01.774141    3874 fix.go:54] fixHost starting: 
	I0729 16:40:01.774268    3874 fix.go:112] recreateIfNeeded on multinode-100000: state=Stopped err=<nil>
	W0729 16:40:01.774277    3874 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:40:01.782707    3874 out.go:177] * Restarting existing qemu2 VM for "multinode-100000" ...
	I0729 16:40:01.786684    3874 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:40:01.786729    3874 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:cd:0a:ec:af:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/disk.qcow2
	I0729 16:40:01.788975    3874 main.go:141] libmachine: STDOUT: 
	I0729 16:40:01.788999    3874 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:40:01.789028    3874 fix.go:56] duration metric: took 14.887958ms for fixHost
	I0729 16:40:01.789034    3874 start.go:83] releasing machines lock for "multinode-100000", held for 14.906417ms
	W0729 16:40:01.789040    3874 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:40:01.789076    3874 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:40:01.789081    3874 start.go:729] Will try again in 5 seconds ...
	I0729 16:40:06.791153    3874 start.go:360] acquireMachinesLock for multinode-100000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:40:06.791533    3874 start.go:364] duration metric: took 277.834µs to acquireMachinesLock for "multinode-100000"
	I0729 16:40:06.791648    3874 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:40:06.791666    3874 fix.go:54] fixHost starting: 
	I0729 16:40:06.792359    3874 fix.go:112] recreateIfNeeded on multinode-100000: state=Stopped err=<nil>
	W0729 16:40:06.792384    3874 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:40:06.796802    3874 out.go:177] * Restarting existing qemu2 VM for "multinode-100000" ...
	I0729 16:40:06.804715    3874 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:40:06.804969    3874 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:cd:0a:ec:af:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/disk.qcow2
	I0729 16:40:06.813794    3874 main.go:141] libmachine: STDOUT: 
	I0729 16:40:06.813848    3874 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:40:06.813916    3874 fix.go:56] duration metric: took 22.252125ms for fixHost
	I0729 16:40:06.813936    3874 start.go:83] releasing machines lock for "multinode-100000", held for 22.376792ms
	W0729 16:40:06.814090    3874 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-100000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-100000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:40:06.821709    3874 out.go:177] 
	W0729 16:40:06.825863    3874 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:40:06.825893    3874 out.go:239] * 
	* 
	W0729 16:40:06.828726    3874 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:40:06.834742    3874 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-100000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-100000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000: exit status 7 (32.125792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-100000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.25s)
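Every restart attempt in this run dies at the same point: `socket_vmnet_client` cannot reach `/var/run/socket_vmnet`, meaning nothing is listening on that unix socket, so the socket_vmnet daemon is evidently not running on the CI host. The condition can be reproduced independently of minikube with a short probe; the socket path is taken from the log, everything else is a sketch:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client connects to. With no
		// daemon listening, this fails with "connect: connection refused",
		// the same error threaded through every restart above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}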

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-100000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-100000 node delete m03: exit status 83 (39.496875ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-100000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-100000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-100000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-100000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-100000 status --alsologtostderr: exit status 7 (29.414625ms)

                                                
                                                
-- stdout --
	multinode-100000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:40:07.010938    3888 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:40:07.011094    3888 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:40:07.011097    3888 out.go:304] Setting ErrFile to fd 2...
	I0729 16:40:07.011099    3888 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:40:07.011231    3888 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:40:07.011340    3888 out.go:298] Setting JSON to false
	I0729 16:40:07.011349    3888 mustload.go:65] Loading cluster: multinode-100000
	I0729 16:40:07.011413    3888 notify.go:220] Checking for updates...
	I0729 16:40:07.011529    3888 config.go:182] Loaded profile config "multinode-100000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:40:07.011536    3888 status.go:255] checking status of multinode-100000 ...
	I0729 16:40:07.011747    3888 status.go:330] multinode-100000 host status = "Stopped" (err=<nil>)
	I0729 16:40:07.011752    3888 status.go:343] host is not running, skipping remaining checks
	I0729 16:40:07.011754    3888 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-100000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000: exit status 7 (29.766625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-100000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
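The post-mortem helper narrows `status` output to a single field by passing a Go text/template via `--format={{.Host}}`, evaluated against the same status struct that status.go:257 dumps above. A self-contained reproduction of that rendering (the struct here copies only the fields visible in the log):

	package main

	import (
		"os"
		"text/template"
	)

	// Status mirrors the fields of the struct logged at status.go:257.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		st := Status{Name: "multinode-100000", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil { // prints "Stopped", as in the -- stdout -- block above
			panic(err)
		}
	}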

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-100000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-100000 stop: (3.460294875s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-100000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-100000 status: exit status 7 (65.999917ms)

                                                
                                                
-- stdout --
	multinode-100000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-100000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-100000 status --alsologtostderr: exit status 7 (31.85075ms)

                                                
                                                
-- stdout --
	multinode-100000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:40:10.599496    3912 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:40:10.599647    3912 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:40:10.599651    3912 out.go:304] Setting ErrFile to fd 2...
	I0729 16:40:10.599653    3912 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:40:10.599803    3912 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:40:10.599916    3912 out.go:298] Setting JSON to false
	I0729 16:40:10.599926    3912 mustload.go:65] Loading cluster: multinode-100000
	I0729 16:40:10.599990    3912 notify.go:220] Checking for updates...
	I0729 16:40:10.600138    3912 config.go:182] Loaded profile config "multinode-100000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:40:10.600144    3912 status.go:255] checking status of multinode-100000 ...
	I0729 16:40:10.600344    3912 status.go:330] multinode-100000 host status = "Stopped" (err=<nil>)
	I0729 16:40:10.600348    3912 status.go:343] host is not running, skipping remaining checks
	I0729 16:40:10.600350    3912 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-100000 status --alsologtostderr": multinode-100000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-100000 status --alsologtostderr": multinode-100000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000: exit status 7 (29.667667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-100000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.59s)
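StopMultiNode fails at multinode_test.go:364 and :368 even though `stop` itself succeeded: the status output lists one stopped node where the test expects two, because the second node was never added after the earlier start failures. The check is presumably a plain substring count over the status text, along these lines (the expected count of 2 is an assumption about the test, not shown in this log):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status text captured above: a single control-plane node.
		status := "multinode-100000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"

		wantNodes := 2 // assumed expectation for a two-node cluster
		if got := strings.Count(status, "host: Stopped"); got != wantNodes {
			fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
		}
		if got := strings.Count(status, "kubelet: Stopped"); got != wantNodes {
			fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantNodes)
		}
	}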

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-100000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-100000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.177424083s)

                                                
                                                
-- stdout --
	* [multinode-100000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-100000" primary control-plane node in "multinode-100000" cluster
	* Restarting existing qemu2 VM for "multinode-100000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-100000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:40:10.657581    3916 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:40:10.657697    3916 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:40:10.657701    3916 out.go:304] Setting ErrFile to fd 2...
	I0729 16:40:10.657703    3916 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:40:10.657823    3916 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:40:10.658851    3916 out.go:298] Setting JSON to false
	I0729 16:40:10.674746    3916 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2373,"bootTime":1722294037,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:40:10.674812    3916 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:40:10.680195    3916 out.go:177] * [multinode-100000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:40:10.688074    3916 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:40:10.688117    3916 notify.go:220] Checking for updates...
	I0729 16:40:10.694046    3916 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:40:10.697045    3916 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:40:10.698303    3916 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:40:10.701035    3916 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:40:10.704080    3916 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:40:10.707445    3916 config.go:182] Loaded profile config "multinode-100000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:40:10.707698    3916 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:40:10.711974    3916 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:40:10.719069    3916 start.go:297] selected driver: qemu2
	I0729 16:40:10.719078    3916 start.go:901] validating driver "qemu2" against &{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:40:10.719148    3916 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:40:10.721264    3916 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:40:10.721282    3916 cni.go:84] Creating CNI manager for ""
	I0729 16:40:10.721287    3916 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 16:40:10.721330    3916 start.go:340] cluster config:
	{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:40:10.724597    3916 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:40:10.732050    3916 out.go:177] * Starting "multinode-100000" primary control-plane node in "multinode-100000" cluster
	I0729 16:40:10.735932    3916 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:40:10.735948    3916 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:40:10.735959    3916 cache.go:56] Caching tarball of preloaded images
	I0729 16:40:10.736016    3916 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:40:10.736021    3916 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:40:10.736096    3916 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/multinode-100000/config.json ...
	I0729 16:40:10.736506    3916 start.go:360] acquireMachinesLock for multinode-100000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:40:10.736534    3916 start.go:364] duration metric: took 22.417µs to acquireMachinesLock for "multinode-100000"
	I0729 16:40:10.736543    3916 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:40:10.736550    3916 fix.go:54] fixHost starting: 
	I0729 16:40:10.736674    3916 fix.go:112] recreateIfNeeded on multinode-100000: state=Stopped err=<nil>
	W0729 16:40:10.736682    3916 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:40:10.741055    3916 out.go:177] * Restarting existing qemu2 VM for "multinode-100000" ...
	I0729 16:40:10.749030    3916 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:40:10.749072    3916 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:cd:0a:ec:af:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/disk.qcow2
	I0729 16:40:10.751022    3916 main.go:141] libmachine: STDOUT: 
	I0729 16:40:10.751038    3916 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:40:10.751064    3916 fix.go:56] duration metric: took 14.513875ms for fixHost
	I0729 16:40:10.751069    3916 start.go:83] releasing machines lock for "multinode-100000", held for 14.531166ms
	W0729 16:40:10.751074    3916 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:40:10.751109    3916 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:40:10.751114    3916 start.go:729] Will try again in 5 seconds ...
	I0729 16:40:15.753178    3916 start.go:360] acquireMachinesLock for multinode-100000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:40:15.753571    3916 start.go:364] duration metric: took 317.125µs to acquireMachinesLock for "multinode-100000"
	I0729 16:40:15.753707    3916 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:40:15.753726    3916 fix.go:54] fixHost starting: 
	I0729 16:40:15.754407    3916 fix.go:112] recreateIfNeeded on multinode-100000: state=Stopped err=<nil>
	W0729 16:40:15.754432    3916 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:40:15.757756    3916 out.go:177] * Restarting existing qemu2 VM for "multinode-100000" ...
	I0729 16:40:15.764799    3916 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:40:15.765126    3916 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:cd:0a:ec:af:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/multinode-100000/disk.qcow2
	I0729 16:40:15.773863    3916 main.go:141] libmachine: STDOUT: 
	I0729 16:40:15.773935    3916 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:40:15.774039    3916 fix.go:56] duration metric: took 20.312625ms for fixHost
	I0729 16:40:15.774069    3916 start.go:83] releasing machines lock for "multinode-100000", held for 20.457458ms
	W0729 16:40:15.774307    3916 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-100000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-100000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:40:15.781808    3916 out.go:177] 
	W0729 16:40:15.785841    3916 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:40:15.785864    3916 out.go:239] * 
	* 
	W0729 16:40:15.788502    3916 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:40:15.795726    3916 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-100000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000: exit status 7 (66.397291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-100000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
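The start path above retries exactly once: after the first connection-refused failure it logs "Will try again in 5 seconds ...", sleeps, repeats the identical qemu invocation, and only then exits with GUEST_PROVISION. Stripped of the minikube machinery, the control flow is a fixed-delay retry, roughly:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver start that fails in the log; it
	// always returns the same connection-refused error while the daemon is down.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		const attempts = 2 // initial try plus one retry, matching the log
		var err error
		for i := 0; i < attempts; i++ {
			if err = startHost(); err == nil {
				fmt.Println("host started")
				return
			}
			if i < attempts-1 {
				fmt.Println("! StartHost failed, but will try again:", err)
				time.Sleep(5 * time.Second)
			}
		}
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}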

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-100000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-100000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-100000-m01 --driver=qemu2 : exit status 80 (9.914275291s)

                                                
                                                
-- stdout --
	* [multinode-100000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-100000-m01" primary control-plane node in "multinode-100000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-100000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-100000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-100000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-100000-m02 --driver=qemu2 : exit status 80 (9.972603291s)

                                                
                                                
-- stdout --
	* [multinode-100000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-100000-m02" primary control-plane node in "multinode-100000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-100000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-100000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-100000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-100000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-100000: exit status 83 (79.264416ms)

-- stdout --
	* The control-plane node multinode-100000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-100000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-100000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-100000 -n multinode-100000: exit status 7 (29.642417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-100000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.11s)
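
Every failed start in this group dies the same way: the qemu2 driver launches the VM through socket_vmnet_client, and the dial of /var/run/socket_vmnet is refused, so no host ever comes up. That points at the socket_vmnet daemon not running (or not listening on that path) on the CI host rather than at the individual tests. A minimal Go sketch of a pre-flight probe for the socket, a hypothetical helper that is not part of minikube or this suite:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// probeSocketVMNet dials the unix socket that the qemu2 driver hands to
	// socket_vmnet_client and reports whether anything is listening on it.
	func probeSocketVMNet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
		}
		return conn.Close()
	}

	func main() {
		if err := probeSocketVMNet("/var/run/socket_vmnet"); err != nil {
			fmt.Fprintln(os.Stderr, err) // would report the same "Connection refused" seen above
			os.Exit(1)
		}
		fmt.Println("socket_vmnet is up")
	}

Run once before the suite, a probe like this would collapse the many per-test GUEST_PROVISION failures into a single clear environment failure.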

TestPreload (10.14s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-714000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-714000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.988813709s)

-- stdout --
	* [test-preload-714000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-714000" primary control-plane node in "test-preload-714000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-714000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:40:36.124190    3979 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:40:36.124320    3979 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:40:36.124322    3979 out.go:304] Setting ErrFile to fd 2...
	I0729 16:40:36.124325    3979 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:40:36.124473    3979 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:40:36.125523    3979 out.go:298] Setting JSON to false
	I0729 16:40:36.141477    3979 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2399,"bootTime":1722294037,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:40:36.141544    3979 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:40:36.147719    3979 out.go:177] * [test-preload-714000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:40:36.155668    3979 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:40:36.155730    3979 notify.go:220] Checking for updates...
	I0729 16:40:36.163676    3979 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:40:36.166642    3979 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:40:36.169667    3979 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:40:36.172681    3979 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:40:36.175650    3979 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:40:36.179050    3979 config.go:182] Loaded profile config "multinode-100000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:40:36.179102    3979 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:40:36.183546    3979 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:40:36.190661    3979 start.go:297] selected driver: qemu2
	I0729 16:40:36.190666    3979 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:40:36.190672    3979 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:40:36.193085    3979 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:40:36.196726    3979 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:40:36.199722    3979 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:40:36.199751    3979 cni.go:84] Creating CNI manager for ""
	I0729 16:40:36.199759    3979 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:40:36.199766    3979 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:40:36.199798    3979 start.go:340] cluster config:
	{Name:test-preload-714000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:40:36.203520    3979 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:40:36.211639    3979 out.go:177] * Starting "test-preload-714000" primary control-plane node in "test-preload-714000" cluster
	I0729 16:40:36.215685    3979 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0729 16:40:36.215775    3979 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/test-preload-714000/config.json ...
	I0729 16:40:36.215799    3979 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/test-preload-714000/config.json: {Name:mkeb401384a3494899738dc113782e7cc2be0b3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:40:36.215798    3979 cache.go:107] acquiring lock: {Name:mk9b7516d94ba00b6a4aa7e39cdfccbd9abc18a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:40:36.215805    3979 cache.go:107] acquiring lock: {Name:mk711e3f6812e5d605daea81cdb406a093ed9f74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:40:36.215822    3979 cache.go:107] acquiring lock: {Name:mkf97fecc697bf8a1344b054aeca8634a4a7dadc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:40:36.215800    3979 cache.go:107] acquiring lock: {Name:mk405e43a24c91a3762347a1d44d5c016c786ad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:40:36.216016    3979 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 16:40:36.216011    3979 cache.go:107] acquiring lock: {Name:mk344953baac34d68c6c465fc42fe854ca065bc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:40:36.216052    3979 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 16:40:36.216037    3979 cache.go:107] acquiring lock: {Name:mk57a9cdc930dbc0e45937c02b82ece4cd33db32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:40:36.216070    3979 start.go:360] acquireMachinesLock for test-preload-714000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:40:36.216087    3979 cache.go:107] acquiring lock: {Name:mk4f0e4d3132b71decd006d70ae94e4ca4acb28c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:40:36.216102    3979 cache.go:107] acquiring lock: {Name:mkc12c822c6291b918a357df8c21d701a6a31a56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:40:36.216149    3979 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:40:36.216261    3979 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 16:40:36.216312    3979 start.go:364] duration metric: took 235.25µs to acquireMachinesLock for "test-preload-714000"
	I0729 16:40:36.216327    3979 start.go:93] Provisioning new machine with config: &{Name:test-preload-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:40:36.216355    3979 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:40:36.216381    3979 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:40:36.216429    3979 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 16:40:36.216459    3979 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 16:40:36.216484    3979 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 16:40:36.219661    3979 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:40:36.227348    3979 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 16:40:36.227396    3979 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 16:40:36.228145    3979 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:40:36.230227    3979 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 16:40:36.230311    3979 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 16:40:36.230386    3979 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 16:40:36.231068    3979 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 16:40:36.231086    3979 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:40:36.238466    3979 start.go:159] libmachine.API.Create for "test-preload-714000" (driver="qemu2")
	I0729 16:40:36.238485    3979 client.go:168] LocalClient.Create starting
	I0729 16:40:36.238572    3979 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:40:36.238604    3979 main.go:141] libmachine: Decoding PEM data...
	I0729 16:40:36.238614    3979 main.go:141] libmachine: Parsing certificate...
	I0729 16:40:36.238655    3979 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:40:36.238679    3979 main.go:141] libmachine: Decoding PEM data...
	I0729 16:40:36.238698    3979 main.go:141] libmachine: Parsing certificate...
	I0729 16:40:36.239092    3979 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:40:36.393684    3979 main.go:141] libmachine: Creating SSH key...
	I0729 16:40:36.621189    3979 main.go:141] libmachine: Creating Disk image...
	I0729 16:40:36.621207    3979 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:40:36.621411    3979 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/test-preload-714000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/test-preload-714000/disk.qcow2
	I0729 16:40:36.630563    3979 main.go:141] libmachine: STDOUT: 
	I0729 16:40:36.630580    3979 main.go:141] libmachine: STDERR: 
	I0729 16:40:36.630625    3979 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/test-preload-714000/disk.qcow2 +20000M
	I0729 16:40:36.638747    3979 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:40:36.638760    3979 main.go:141] libmachine: STDERR: 
	I0729 16:40:36.638770    3979 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/test-preload-714000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/test-preload-714000/disk.qcow2
	I0729 16:40:36.638773    3979 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:40:36.638783    3979 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:40:36.638808    3979 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/test-preload-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/test-preload-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/test-preload-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:32:ec:29:99:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/test-preload-714000/disk.qcow2
	I0729 16:40:36.640591    3979 main.go:141] libmachine: STDOUT: 
	I0729 16:40:36.640605    3979 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:40:36.640622    3979 client.go:171] duration metric: took 402.139416ms to LocalClient.Create
	I0729 16:40:36.706376    3979 cache.go:162] opening:  /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 16:40:36.717609    3979 cache.go:162] opening:  /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0729 16:40:36.731624    3979 cache.go:162] opening:  /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0729 16:40:36.756034    3979 cache.go:162] opening:  /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0729 16:40:36.816972    3979 cache.go:162] opening:  /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0729 16:40:36.866641    3979 cache.go:162] opening:  /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 16:40:36.883038    3979 cache.go:157] /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0729 16:40:36.883075    3979 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 667.259125ms
	I0729 16:40:36.883105    3979 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0729 16:40:36.886705    3979 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 16:40:36.886758    3979 cache.go:162] opening:  /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	W0729 16:40:37.011700    3979 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 16:40:37.011810    3979 cache.go:162] opening:  /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 16:40:37.212285    3979 cache.go:157] /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 16:40:37.212341    3979 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 996.559375ms
	I0729 16:40:37.212367    3979 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 16:40:38.448481    3979 cache.go:157] /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0729 16:40:38.448528    3979 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.232598583s
	I0729 16:40:38.448551    3979 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0729 16:40:38.641145    3979 start.go:128] duration metric: took 2.424789042s to createHost
	I0729 16:40:38.641199    3979 start.go:83] releasing machines lock for "test-preload-714000", held for 2.42491125s
	W0729 16:40:38.641264    3979 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:40:38.654445    3979 out.go:177] * Deleting "test-preload-714000" in qemu2 ...
	W0729 16:40:38.683544    3979 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:40:38.683582    3979 start.go:729] Will try again in 5 seconds ...
	I0729 16:40:39.743931    3979 cache.go:157] /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0729 16:40:39.743983    3979 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.528042791s
	I0729 16:40:39.744006    3979 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0729 16:40:39.786687    3979 cache.go:157] /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0729 16:40:39.786728    3979 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 3.57098575s
	I0729 16:40:39.786753    3979 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0729 16:40:41.523261    3979 cache.go:157] /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0729 16:40:41.523322    3979 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.307602625s
	I0729 16:40:41.523346    3979 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0729 16:40:42.935872    3979 cache.go:157] /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0729 16:40:42.935926    3979 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.720055875s
	I0729 16:40:42.935950    3979 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0729 16:40:43.684126    3979 start.go:360] acquireMachinesLock for test-preload-714000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:40:43.684659    3979 start.go:364] duration metric: took 452.542µs to acquireMachinesLock for "test-preload-714000"
	I0729 16:40:43.684787    3979 start.go:93] Provisioning new machine with config: &{Name:test-preload-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:40:43.685003    3979 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:40:43.694539    3979 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:40:43.744294    3979 start.go:159] libmachine.API.Create for "test-preload-714000" (driver="qemu2")
	I0729 16:40:43.744340    3979 client.go:168] LocalClient.Create starting
	I0729 16:40:43.744461    3979 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:40:43.744525    3979 main.go:141] libmachine: Decoding PEM data...
	I0729 16:40:43.744548    3979 main.go:141] libmachine: Parsing certificate...
	I0729 16:40:43.744619    3979 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:40:43.744663    3979 main.go:141] libmachine: Decoding PEM data...
	I0729 16:40:43.744677    3979 main.go:141] libmachine: Parsing certificate...
	I0729 16:40:43.745241    3979 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:40:43.909829    3979 main.go:141] libmachine: Creating SSH key...
	I0729 16:40:44.014917    3979 main.go:141] libmachine: Creating Disk image...
	I0729 16:40:44.014923    3979 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:40:44.015129    3979 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/test-preload-714000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/test-preload-714000/disk.qcow2
	I0729 16:40:44.024737    3979 main.go:141] libmachine: STDOUT: 
	I0729 16:40:44.024757    3979 main.go:141] libmachine: STDERR: 
	I0729 16:40:44.024816    3979 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/test-preload-714000/disk.qcow2 +20000M
	I0729 16:40:44.032909    3979 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:40:44.032923    3979 main.go:141] libmachine: STDERR: 
	I0729 16:40:44.032942    3979 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/test-preload-714000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/test-preload-714000/disk.qcow2
	I0729 16:40:44.032951    3979 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:40:44.032959    3979 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:40:44.032995    3979 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/test-preload-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/test-preload-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/test-preload-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:85:7c:2d:61:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/test-preload-714000/disk.qcow2
	I0729 16:40:44.034742    3979 main.go:141] libmachine: STDOUT: 
	I0729 16:40:44.034788    3979 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:40:44.034801    3979 client.go:171] duration metric: took 290.46075ms to LocalClient.Create
	I0729 16:40:46.035573    3979 start.go:128] duration metric: took 2.350565833s to createHost
	I0729 16:40:46.035645    3979 start.go:83] releasing machines lock for "test-preload-714000", held for 2.350995416s
	W0729 16:40:46.035937    3979 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:40:46.051615    3979 out.go:177] 
	W0729 16:40:46.055474    3979 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:40:46.055497    3979 out.go:239] * 
	* 
	W0729 16:40:46.058218    3979 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:40:46.070454    3979 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-714000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-29 16:40:46.087796 -0700 PDT m=+2276.193452043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-714000 -n test-preload-714000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-714000 -n test-preload-714000: exit status 7 (65.921791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-714000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-714000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-714000
--- FAIL: TestPreload (10.14s)
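
The stderr trace above gets through the disk step cleanly before the network attach fails: libmachine converts the raw boot image to qcow2, grows it by 20000 MB, and only then execs qemu-system-aarch64 through socket_vmnet_client, which is where the refused connection surfaces. A Go sketch of those two qemu-img invocations (illustrative shape only, with shortened paths; not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// createDisk mirrors the trace: convert the raw image to qcow2, then
	// resize the qcow2 upward by the requested number of megabytes.
	func createDisk(raw, qcow2 string, extraMB int) error {
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img convert: %v: %s", err, out)
		}
		if out, err := exec.Command("qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img resize: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		// Hypothetical paths; the run above works inside the profile's machines directory.
		if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
			fmt.Println(err)
		}
	}

Note also that the image caching in the same trace is independent of the VM: the cached tarballs keep landing even though both createHost attempts fail.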

TestScheduledStopUnix (10.09s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-707000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-707000 --memory=2048 --driver=qemu2 : exit status 80 (9.94674475s)

-- stdout --
	* [scheduled-stop-707000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-707000" primary control-plane node in "scheduled-stop-707000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-707000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-707000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-707000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-707000" primary control-plane node in "scheduled-stop-707000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-707000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-707000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-29 16:40:56.184067 -0700 PDT m=+2286.289867668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-707000 -n scheduled-stop-707000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-707000 -n scheduled-stop-707000: exit status 7 (67.639209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-707000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-707000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-707000
--- FAIL: TestScheduledStopUnix (10.09s)
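
As in the TestPreload trace, host creation gets exactly two attempts: after the first "StartHost failed, but will try again", the half-created profile is deleted and the create is retried once after five seconds ("Will try again in 5 seconds ..." in the trace), and the second refusal becomes the GUEST_PROVISION exit. A rough Go sketch of that two-attempt shape, assuming a single fixed-delay retry (the names here are illustrative, not minikube's):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the driver's host-creation step, which in
	// this run always dies at the socket_vmnet attach.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	// startWithRetry mirrors the trace: one retry, fixed five-second delay.
	func startWithRetry() error {
		err := createHost()
		if err == nil {
			return nil
		}
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		return createHost()
	}

	func main() {
		if err := startWithRetry(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}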

TestSkaffold (12.09s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1236221877 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1236221877 version: (1.068757875s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-775000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-775000 --memory=2600 --driver=qemu2 : exit status 80 (9.837152542s)

-- stdout --
	* [skaffold-775000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-775000" primary control-plane node in "skaffold-775000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-775000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-775000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-775000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-775000" primary control-plane node in "skaffold-775000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-775000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-775000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-29 16:41:08.2779 -0700 PDT m=+2298.383873668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-775000 -n skaffold-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-775000 -n skaffold-775000: exit status 7 (60.348875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-775000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-775000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-775000
--- FAIL: TestSkaffold (12.09s)
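
Three exit codes recur across these sections: 80 when start aborts with GUEST_PROVISION, 83 when a command targets a control plane whose host is stopped, and 7 from status against a stopped host (which helpers_test.go treats as "may be ok"). A small Go sketch of how a harness might branch on them; the mapping is read off this report, not an exhaustive list of minikube exit codes:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// classify maps the exit codes observed in this report to short labels.
	func classify(err error) string {
		if err == nil {
			return "ok"
		}
		var ee *exec.ExitError
		if !errors.As(err, &ee) {
			return "binary failed to launch"
		}
		switch ee.ExitCode() {
		case 80:
			return "GUEST_PROVISION: VM never came up"
		case 83:
			return "control-plane host not running"
		case 7:
			return "status: host stopped (may be ok)"
		default:
			return fmt.Sprintf("unexpected exit status %d", ee.ExitCode())
		}
	}

	func main() {
		err := exec.Command("out/minikube-darwin-arm64", "status", "-p", "skaffold-775000").Run()
		fmt.Println(classify(err))
	}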

TestRunningBinaryUpgrade (593.17s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3559195954 start -p running-upgrade-980000 --memory=2200 --vm-driver=qemu2 
E0729 16:42:01.314650    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3559195954 start -p running-upgrade-980000 --memory=2200 --vm-driver=qemu2 : (48.617538375s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-980000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0729 16:43:41.911591    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-980000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m30.14191575s)

-- stdout --
	* [running-upgrade-980000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-980000" primary control-plane node in "running-upgrade-980000" cluster
	* Updating the running qemu2 "running-upgrade-980000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0729 16:42:39.820352    4389 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:42:39.820491    4389 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:42:39.820498    4389 out.go:304] Setting ErrFile to fd 2...
	I0729 16:42:39.820501    4389 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:42:39.820653    4389 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:42:39.821737    4389 out.go:298] Setting JSON to false
	I0729 16:42:39.838705    4389 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2522,"bootTime":1722294037,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:42:39.838799    4389 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:42:39.843722    4389 out.go:177] * [running-upgrade-980000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:42:39.850729    4389 notify.go:220] Checking for updates...
	I0729 16:42:39.854676    4389 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:42:39.858673    4389 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:42:39.862639    4389 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:42:39.865689    4389 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:42:39.868675    4389 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:42:39.871706    4389 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:42:39.874897    4389 config.go:182] Loaded profile config "running-upgrade-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:42:39.878708    4389 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 16:42:39.881647    4389 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:42:39.885628    4389 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:42:39.891566    4389 start.go:297] selected driver: qemu2
	I0729 16:42:39.891573    4389 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-980000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50302 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 16:42:39.891615    4389 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:42:39.893899    4389 cni.go:84] Creating CNI manager for ""
	I0729 16:42:39.893913    4389 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:42:39.893941    4389 start.go:340] cluster config:
	{Name:running-upgrade-980000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50302 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 16:42:39.893989    4389 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:42:39.901689    4389 out.go:177] * Starting "running-upgrade-980000" primary control-plane node in "running-upgrade-980000" cluster
	I0729 16:42:39.905650    4389 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 16:42:39.905664    4389 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 16:42:39.905671    4389 cache.go:56] Caching tarball of preloaded images
	I0729 16:42:39.905713    4389 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:42:39.905718    4389 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 16:42:39.905765    4389 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/config.json ...
	I0729 16:42:39.906183    4389 start.go:360] acquireMachinesLock for running-upgrade-980000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:42:39.906210    4389 start.go:364] duration metric: took 21.042µs to acquireMachinesLock for "running-upgrade-980000"
	I0729 16:42:39.906218    4389 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:42:39.906223    4389 fix.go:54] fixHost starting: 
	I0729 16:42:39.906803    4389 fix.go:112] recreateIfNeeded on running-upgrade-980000: state=Running err=<nil>
	W0729 16:42:39.906811    4389 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:42:39.915610    4389 out.go:177] * Updating the running qemu2 "running-upgrade-980000" VM ...
	I0729 16:42:39.921701    4389 machine.go:94] provisionDockerMachine start ...
	I0729 16:42:39.921753    4389 main.go:141] libmachine: Using SSH client type: native
	I0729 16:42:39.921871    4389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b5ea10] 0x104b61270 <nil>  [] 0s} localhost 50270 <nil> <nil>}
	I0729 16:42:39.921877    4389 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 16:42:39.981205    4389 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-980000
	
	I0729 16:42:39.981221    4389 buildroot.go:166] provisioning hostname "running-upgrade-980000"
	I0729 16:42:39.981264    4389 main.go:141] libmachine: Using SSH client type: native
	I0729 16:42:39.981385    4389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b5ea10] 0x104b61270 <nil>  [] 0s} localhost 50270 <nil> <nil>}
	I0729 16:42:39.981390    4389 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-980000 && echo "running-upgrade-980000" | sudo tee /etc/hostname
	I0729 16:42:40.043502    4389 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-980000
	
	I0729 16:42:40.043551    4389 main.go:141] libmachine: Using SSH client type: native
	I0729 16:42:40.043661    4389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b5ea10] 0x104b61270 <nil>  [] 0s} localhost 50270 <nil> <nil>}
	I0729 16:42:40.043669    4389 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-980000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-980000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-980000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 16:42:40.098335    4389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
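
The three SSH runs above are the whole hostname provisioning step: read the current hostname, set it and persist it to /etc/hostname, then patch the 127.0.1.1 entry in /etc/hosts. A minimal Go sketch of the same run-command-over-SSH pattern, assuming golang.org/x/crypto/ssh and the port/key path shown in the log (illustrative only, not minikube's actual libmachine code):

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path as it appears in the log above.
        key, err := os.ReadFile("/Users/jenkins/minikube-integration/19347-923/.minikube/machines/running-upgrade-980000/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
        }
        client, err := ssh.Dial("tcp", "localhost:50270", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // One session per command, mirroring the sequence of SSH runs in the log.
        for _, cmd := range []string{
            "hostname",
            `sudo hostname running-upgrade-980000 && echo "running-upgrade-980000" | sudo tee /etc/hostname`,
        } {
            sess, err := client.NewSession()
            if err != nil {
                log.Fatal(err)
            }
            out, err := sess.CombinedOutput(cmd)
            sess.Close()
            if err != nil {
                log.Fatalf("%s: %v", cmd, err)
            }
            fmt.Printf("%s => %s", cmd, out)
        }
    }
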
	I0729 16:42:40.098349    4389 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19347-923/.minikube CaCertPath:/Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19347-923/.minikube}
	I0729 16:42:40.098358    4389 buildroot.go:174] setting up certificates
	I0729 16:42:40.098363    4389 provision.go:84] configureAuth start
	I0729 16:42:40.098370    4389 provision.go:143] copyHostCerts
	I0729 16:42:40.098433    4389 exec_runner.go:144] found /Users/jenkins/minikube-integration/19347-923/.minikube/key.pem, removing ...
	I0729 16:42:40.098439    4389 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19347-923/.minikube/key.pem
	I0729 16:42:40.098547    4389 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19347-923/.minikube/key.pem (1679 bytes)
	I0729 16:42:40.098698    4389 exec_runner.go:144] found /Users/jenkins/minikube-integration/19347-923/.minikube/ca.pem, removing ...
	I0729 16:42:40.098703    4389 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19347-923/.minikube/ca.pem
	I0729 16:42:40.098751    4389 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19347-923/.minikube/ca.pem (1082 bytes)
	I0729 16:42:40.098844    4389 exec_runner.go:144] found /Users/jenkins/minikube-integration/19347-923/.minikube/cert.pem, removing ...
	I0729 16:42:40.098848    4389 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19347-923/.minikube/cert.pem
	I0729 16:42:40.098889    4389 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19347-923/.minikube/cert.pem (1123 bytes)
	I0729 16:42:40.098975    4389 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19347-923/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-980000 san=[127.0.0.1 localhost minikube running-upgrade-980000]
	I0729 16:42:40.224650    4389 provision.go:177] copyRemoteCerts
	I0729 16:42:40.224739    4389 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 16:42:40.224752    4389 sshutil.go:53] new ssh client: &{IP:localhost Port:50270 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/running-upgrade-980000/id_rsa Username:docker}
	I0729 16:42:40.255149    4389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 16:42:40.262362    4389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 16:42:40.270115    4389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 16:42:40.277081    4389 provision.go:87] duration metric: took 178.716709ms to configureAuth
	I0729 16:42:40.277092    4389 buildroot.go:189] setting minikube options for container-runtime
	I0729 16:42:40.277205    4389 config.go:182] Loaded profile config "running-upgrade-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:42:40.277241    4389 main.go:141] libmachine: Using SSH client type: native
	I0729 16:42:40.277331    4389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b5ea10] 0x104b61270 <nil>  [] 0s} localhost 50270 <nil> <nil>}
	I0729 16:42:40.277335    4389 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 16:42:40.335399    4389 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 16:42:40.335410    4389 buildroot.go:70] root file system type: tmpfs
	I0729 16:42:40.335463    4389 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 16:42:40.335515    4389 main.go:141] libmachine: Using SSH client type: native
	I0729 16:42:40.335633    4389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b5ea10] 0x104b61270 <nil>  [] 0s} localhost 50270 <nil> <nil>}
	I0729 16:42:40.335666    4389 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 16:42:40.395952    4389 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 16:42:40.396006    4389 main.go:141] libmachine: Using SSH client type: native
	I0729 16:42:40.396122    4389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b5ea10] 0x104b61270 <nil>  [] 0s} localhost 50270 <nil> <nil>}
	I0729 16:42:40.396131    4389 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 16:42:40.454037    4389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
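
The empty output above is the success case of a compare-and-swap update: `diff -u` succeeds when the rendered docker.service matches the installed one, so the `mv` / `daemon-reload` / `restart` branch only fires when the unit actually changed. A small Go sketch of that idempotent update pattern, with hypothetical paths and no sudo handling:

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os"
        "os/exec"
    )

    // updateUnit installs the desired unit content at path and restarts the
    // service only if the content differs -- the same effect as the logged
    // `diff -u old new || { mv; systemctl daemon-reload; enable; restart; }`.
    func updateUnit(path string, desired []byte, service string) error {
        current, _ := os.ReadFile(path) // a missing file reads as empty
        if bytes.Equal(current, desired) {
            return nil // unchanged: no restart, no downtime
        }
        if err := os.WriteFile(path+".new", desired, 0o644); err != nil {
            return err
        }
        if err := os.Rename(path+".new", path); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"}, {"enable", service}, {"restart", service},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        unit := []byte("[Unit]\nDescription=example\n") // placeholder unit body
        if err := updateUnit("/tmp/docker.service", unit, "docker"); err != nil {
            log.Fatal(err)
        }
    }
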
	I0729 16:42:40.454047    4389 machine.go:97] duration metric: took 532.347792ms to provisionDockerMachine
	I0729 16:42:40.454052    4389 start.go:293] postStartSetup for "running-upgrade-980000" (driver="qemu2")
	I0729 16:42:40.454058    4389 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 16:42:40.454119    4389 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 16:42:40.454127    4389 sshutil.go:53] new ssh client: &{IP:localhost Port:50270 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/running-upgrade-980000/id_rsa Username:docker}
	I0729 16:42:40.483671    4389 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 16:42:40.485017    4389 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 16:42:40.485024    4389 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19347-923/.minikube/addons for local assets ...
	I0729 16:42:40.485090    4389 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19347-923/.minikube/files for local assets ...
	I0729 16:42:40.485175    4389 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19347-923/.minikube/files/etc/ssl/certs/13902.pem -> 13902.pem in /etc/ssl/certs
	I0729 16:42:40.485264    4389 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 16:42:40.487850    4389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/files/etc/ssl/certs/13902.pem --> /etc/ssl/certs/13902.pem (1708 bytes)
	I0729 16:42:40.494884    4389 start.go:296] duration metric: took 40.827875ms for postStartSetup
	I0729 16:42:40.494899    4389 fix.go:56] duration metric: took 588.685875ms for fixHost
	I0729 16:42:40.494933    4389 main.go:141] libmachine: Using SSH client type: native
	I0729 16:42:40.495037    4389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b5ea10] 0x104b61270 <nil>  [] 0s} localhost 50270 <nil> <nil>}
	I0729 16:42:40.495041    4389 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 16:42:40.553030    4389 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722296560.232963513
	
	I0729 16:42:40.553039    4389 fix.go:216] guest clock: 1722296560.232963513
	I0729 16:42:40.553043    4389 fix.go:229] Guest: 2024-07-29 16:42:40.232963513 -0700 PDT Remote: 2024-07-29 16:42:40.494901 -0700 PDT m=+0.694645709 (delta=-261.937487ms)
	I0729 16:42:40.553058    4389 fix.go:200] guest clock delta is within tolerance: -261.937487ms
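
The clock check reads `date +%s.%N` from the guest and compares it against host wall-clock time; only a delta outside tolerance would trigger a resync. A sketch of the delta computation, using the guest timestamp from the log and an assumed 2s threshold (minikube's real tolerance may differ):

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func main() {
        // Guest output of `date +%s.%N`, as seen in the log.
        guestStr := "1722296560.232963513"
        sec, err := strconv.ParseFloat(guestStr, 64)
        if err != nil {
            panic(err)
        }
        // float64 loses sub-microsecond precision; fine for a tolerance check.
        guest := time.Unix(0, int64(sec*float64(time.Second)))
        delta := guest.Sub(time.Now())
        const tolerance = 2 * time.Second // illustrative threshold
        if delta < -tolerance || delta > tolerance {
            fmt.Printf("guest clock off by %v, would resync\n", delta)
        } else {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        }
    }
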
	I0729 16:42:40.553062    4389 start.go:83] releasing machines lock for "running-upgrade-980000", held for 646.85725ms
	I0729 16:42:40.553136    4389 ssh_runner.go:195] Run: cat /version.json
	I0729 16:42:40.553146    4389 sshutil.go:53] new ssh client: &{IP:localhost Port:50270 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/running-upgrade-980000/id_rsa Username:docker}
	I0729 16:42:40.553136    4389 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 16:42:40.553192    4389 sshutil.go:53] new ssh client: &{IP:localhost Port:50270 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/running-upgrade-980000/id_rsa Username:docker}
	W0729 16:42:40.553695    4389 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50270: connect: connection refused
	I0729 16:42:40.553716    4389 retry.go:31] will retry after 220.952675ms: dial tcp [::1]:50270: connect: connection refused
	W0729 16:42:40.583401    4389 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 16:42:40.583462    4389 ssh_runner.go:195] Run: systemctl --version
	I0729 16:42:40.585543    4389 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 16:42:40.587243    4389 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 16:42:40.587268    4389 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 16:42:40.590059    4389 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0729 16:42:40.594072    4389 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 16:42:40.594078    4389 start.go:495] detecting cgroup driver to use...
	I0729 16:42:40.594148    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 16:42:40.599517    4389 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 16:42:40.602770    4389 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 16:42:40.605995    4389 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 16:42:40.606016    4389 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 16:42:40.609376    4389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 16:42:40.612465    4389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 16:42:40.615647    4389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 16:42:40.619039    4389 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 16:42:40.622506    4389 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 16:42:40.625330    4389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 16:42:40.628132    4389 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 16:42:40.631138    4389 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 16:42:40.634266    4389 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 16:42:40.637030    4389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:42:40.712895    4389 ssh_runner.go:195] Run: sudo systemctl restart containerd
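
Everything from the crictl.yaml write down to this restart is containerd reconfiguration done with in-place `sed -i` edits: the sandbox image, SystemdCgroup = false for the cgroupfs driver, the runc v2 runtime, and the CNI conf_dir. A sketch of one of those edits done natively in Go instead of via sed (same pattern as the logged command; path as in the log):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            log.Fatal(err)
        }
    }
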
	I0729 16:42:40.720345    4389 start.go:495] detecting cgroup driver to use...
	I0729 16:42:40.720425    4389 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 16:42:40.728060    4389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 16:42:40.733001    4389 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 16:42:40.739210    4389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 16:42:40.743986    4389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 16:42:40.748239    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 16:42:40.753521    4389 ssh_runner.go:195] Run: which cri-dockerd
	I0729 16:42:40.754838    4389 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 16:42:40.761821    4389 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 16:42:40.767501    4389 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 16:42:40.848908    4389 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 16:42:40.923012    4389 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 16:42:40.923062    4389 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 16:42:40.930695    4389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:42:41.008451    4389 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 16:42:53.580674    4389 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.572385917s)
	I0729 16:42:53.580891    4389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 16:42:53.585744    4389 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 16:42:53.592845    4389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 16:42:53.598929    4389 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 16:42:53.653127    4389 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 16:42:53.727803    4389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:42:53.789271    4389 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 16:42:53.794837    4389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 16:42:53.800059    4389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:42:53.862248    4389 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 16:42:53.900375    4389 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 16:42:53.900457    4389 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 16:42:53.902872    4389 start.go:563] Will wait 60s for crictl version
	I0729 16:42:53.902925    4389 ssh_runner.go:195] Run: which crictl
	I0729 16:42:53.904404    4389 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 16:42:53.915806    4389 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0729 16:42:53.915875    4389 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 16:42:53.931297    4389 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 16:42:53.946919    4389 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 16:42:53.946986    4389 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 16:42:53.948263    4389 kubeadm.go:883] updating cluster {Name:running-upgrade-980000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50302 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 16:42:53.948305    4389 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 16:42:53.948344    4389 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 16:42:53.959099    4389 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 16:42:53.959108    4389 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 16:42:53.959153    4389 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 16:42:53.962407    4389 ssh_runner.go:195] Run: which lz4
	I0729 16:42:53.963687    4389 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 16:42:53.964919    4389 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 16:42:53.964932    4389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0729 16:42:54.868794    4389 docker.go:649] duration metric: took 905.161584ms to copy over tarball
	I0729 16:42:54.868848    4389 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 16:42:56.067631    4389 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.198785708s)
	I0729 16:42:56.067646    4389 ssh_runner.go:146] rm: /preloaded.tar.lz4
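
The preload cycle above is stat-then-copy-then-extract-then-delete: the existence check fails, the 359 MB tarball is scp'd to /preloaded.tar.lz4, unpacked under /var so the docker image store is pre-populated, and then removed. A sketch of the extraction step as it would run on the guest (same tar flags as the log; requires lz4 on PATH):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Same flags as the logged command: preserve xattrs/capabilities,
        // decompress with lz4, and unpack under /var so the preloaded
        // images land in /var/lib/docker.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v: %s", err, out)
        }
    }
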
	I0729 16:42:56.083258    4389 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 16:42:56.086278    4389 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 16:42:56.091315    4389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:42:56.155942    4389 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 16:42:56.339102    4389 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 16:42:56.353563    4389 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 16:42:56.353571    4389 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 16:42:56.353577    4389 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 16:42:56.358975    4389 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:42:56.360900    4389 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:42:56.362027    4389 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:42:56.362113    4389 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:42:56.363491    4389 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:42:56.363519    4389 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:42:56.364773    4389 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:42:56.364823    4389 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:42:56.366172    4389 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 16:42:56.366171    4389 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:42:56.367779    4389 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:42:56.367804    4389 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 16:42:56.368772    4389 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:42:56.368817    4389 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 16:42:56.370158    4389 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 16:42:56.370799    4389 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:42:56.796820    4389 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:42:56.809337    4389 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 16:42:56.809364    4389 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:42:56.809419    4389 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:42:56.821348    4389 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 16:42:56.821760    4389 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:42:56.831747    4389 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 16:42:56.831767    4389 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:42:56.831812    4389 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:42:56.838513    4389 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:42:56.848086    4389 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:42:56.851386    4389 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 16:42:56.863201    4389 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 16:42:56.863225    4389 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:42:56.863287    4389 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:42:56.864590    4389 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 16:42:56.864606    4389 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:42:56.864639    4389 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:42:56.867151    4389 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 16:42:56.873493    4389 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 16:42:56.877335    4389 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0729 16:42:56.879604    4389 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W0729 16:42:56.884564    4389 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 16:42:56.884707    4389 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:42:56.889262    4389 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0729 16:42:56.889284    4389 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0729 16:42:56.889328    4389 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0729 16:42:56.893847    4389 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 16:42:56.893864    4389 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 16:42:56.893911    4389 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 16:42:56.904027    4389 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 16:42:56.904049    4389 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:42:56.904102    4389 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:42:56.909312    4389 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 16:42:56.909327    4389 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 16:42:56.909425    4389 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 16:42:56.915331    4389 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 16:42:56.915334    4389 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0729 16:42:56.915355    4389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0729 16:42:56.915434    4389 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 16:42:56.918029    4389 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 16:42:56.918041    4389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0729 16:42:56.925684    4389 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 16:42:56.925699    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0729 16:42:56.983448    4389 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0729 16:42:56.983466    4389 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 16:42:56.983472    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0729 16:42:57.023637    4389 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
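
Each cached image is streamed into the runtime with `sudo cat <tar> | docker load`. The same pipe can be built in Go by wiring the file directly to the command's stdin, which drops the extra cat; a sketch using the pause image path from the log:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        f, err := os.Open("/var/lib/minikube/images/pause_3.7") // path from the log
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // Equivalent of `sudo cat <file> | docker load`.
        cmd := exec.Command("docker", "load")
        cmd.Stdin = f
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("docker load: %v: %s", err, out)
        }
    }
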
	W0729 16:42:57.024579    4389 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 16:42:57.024703    4389 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:42:57.034653    4389 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 16:42:57.034676    4389 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:42:57.034727    4389 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:42:58.081930    4389 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.0471485s)
	I0729 16:42:58.081967    4389 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 16:42:58.082392    4389 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 16:42:58.087550    4389 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0729 16:42:58.087584    4389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0729 16:42:58.147353    4389 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 16:42:58.147368    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0729 16:42:58.376548    4389 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 16:42:58.376582    4389 cache_images.go:92] duration metric: took 2.023028958s to LoadCachedImages
	W0729 16:42:58.376632    4389 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0729 16:42:58.376641    4389 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 16:42:58.376696    4389 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-980000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 16:42:58.376761    4389 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 16:42:58.390208    4389 cni.go:84] Creating CNI manager for ""
	I0729 16:42:58.390223    4389 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:42:58.390228    4389 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 16:42:58.390237    4389 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-980000 NodeName:running-upgrade-980000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 16:42:58.390295    4389 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-980000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
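
The generated kubeadm.yaml above is four YAML documents in one stream, separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A sketch that walks such a multi-document file and prints each document's kind, assuming gopkg.in/yaml.v3 and the target path from the log:

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // yaml.Decoder yields one document per Decode call.
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }
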
	I0729 16:42:58.390355    4389 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 16:42:58.393918    4389 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 16:42:58.393947    4389 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 16:42:58.396649    4389 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 16:42:58.401433    4389 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 16:42:58.406412    4389 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0729 16:42:58.411917    4389 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 16:42:58.413165    4389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:42:58.477487    4389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 16:42:58.483404    4389 certs.go:68] Setting up /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000 for IP: 10.0.2.15
	I0729 16:42:58.483413    4389 certs.go:194] generating shared ca certs ...
	I0729 16:42:58.483422    4389 certs.go:226] acquiring lock for ca certs: {Name:mk4279a132dfe000316d0221b0d97d4e537df506 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:42:58.483574    4389 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19347-923/.minikube/ca.key
	I0729 16:42:58.483608    4389 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19347-923/.minikube/proxy-client-ca.key
	I0729 16:42:58.483612    4389 certs.go:256] generating profile certs ...
	I0729 16:42:58.483675    4389 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/client.key
	I0729 16:42:58.483691    4389 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/apiserver.key.bdfe408b
	I0729 16:42:58.483703    4389 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/apiserver.crt.bdfe408b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 16:42:58.613772    4389 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/apiserver.crt.bdfe408b ...
	I0729 16:42:58.613779    4389 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/apiserver.crt.bdfe408b: {Name:mk963f93c3bcec2857fe4aadc109626442541b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:42:58.614252    4389 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/apiserver.key.bdfe408b ...
	I0729 16:42:58.614258    4389 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/apiserver.key.bdfe408b: {Name:mk59d883a3e7ef58ed65c1bbac36694c5dc8be5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:42:58.614413    4389 certs.go:381] copying /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/apiserver.crt.bdfe408b -> /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/apiserver.crt
	I0729 16:42:58.614545    4389 certs.go:385] copying /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/apiserver.key.bdfe408b -> /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/apiserver.key
	I0729 16:42:58.614679    4389 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/proxy-client.key
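
The regenerated apiserver cert above is a CA-signed serving cert whose SAN list is exactly the IPs from the log: the service VIP 10.96.0.1, loopback, 10.0.0.1, and the node IP 10.0.2.15. A self-contained Go sketch of issuing such a cert with crypto/x509; it creates a throwaway CA in place of the real .minikube/ca.key, so it is illustrative only:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA standing in for .minikube/ca.{crt,key}.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Serving cert with the IP SANs seen in the log.
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
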
	I0729 16:42:58.614816    4389 certs.go:484] found cert: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/1390.pem (1338 bytes)
	W0729 16:42:58.614838    4389 certs.go:480] ignoring /Users/jenkins/minikube-integration/19347-923/.minikube/certs/1390_empty.pem, impossibly tiny 0 bytes
	I0729 16:42:58.614845    4389 certs.go:484] found cert: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 16:42:58.614864    4389 certs.go:484] found cert: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem (1082 bytes)
	I0729 16:42:58.614882    4389 certs.go:484] found cert: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem (1123 bytes)
	I0729 16:42:58.614900    4389 certs.go:484] found cert: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/key.pem (1679 bytes)
	I0729 16:42:58.614938    4389 certs.go:484] found cert: /Users/jenkins/minikube-integration/19347-923/.minikube/files/etc/ssl/certs/13902.pem (1708 bytes)
	I0729 16:42:58.615266    4389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 16:42:58.622453    4389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 16:42:58.629524    4389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 16:42:58.637147    4389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 16:42:58.644218    4389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 16:42:58.650777    4389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 16:42:58.657675    4389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 16:42:58.664961    4389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 16:42:58.672371    4389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 16:42:58.678982    4389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/certs/1390.pem --> /usr/share/ca-certificates/1390.pem (1338 bytes)
	I0729 16:42:58.685485    4389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/files/etc/ssl/certs/13902.pem --> /usr/share/ca-certificates/13902.pem (1708 bytes)
	I0729 16:42:58.692840    4389 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
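The scp lines above push the freshly generated certificates into /var/lib/minikube/certs on the node and stage the CA copies under /usr/share/ca-certificates. As a sanity check (illustrative only; the run above does not perform this step), a pushed cert/key pair can be confirmed to match by comparing public keys:

	# Hypothetical spot-check on the node: a cert and key pair up iff their
	# public keys are identical. Paths are the ones copied above.
	diff <(openssl x509 -pubkey -noout -in /var/lib/minikube/certs/apiserver.crt) \
	     <(openssl pkey -pubout -in /var/lib/minikube/certs/apiserver.key)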
	I0729 16:42:58.697695    4389 ssh_runner.go:195] Run: openssl version
	I0729 16:42:58.699595    4389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 16:42:58.702554    4389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:42:58.704126    4389 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 23:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:42:58.704144    4389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:42:58.705890    4389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 16:42:58.708792    4389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1390.pem && ln -fs /usr/share/ca-certificates/1390.pem /etc/ssl/certs/1390.pem"
	I0729 16:42:58.712110    4389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1390.pem
	I0729 16:42:58.713545    4389 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 23:11 /usr/share/ca-certificates/1390.pem
	I0729 16:42:58.713566    4389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1390.pem
	I0729 16:42:58.715283    4389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1390.pem /etc/ssl/certs/51391683.0"
	I0729 16:42:58.717882    4389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13902.pem && ln -fs /usr/share/ca-certificates/13902.pem /etc/ssl/certs/13902.pem"
	I0729 16:42:58.720866    4389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13902.pem
	I0729 16:42:58.722403    4389 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 23:11 /usr/share/ca-certificates/13902.pem
	I0729 16:42:58.722423    4389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13902.pem
	I0729 16:42:58.724309    4389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13902.pem /etc/ssl/certs/3ec20f2e.0"
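Each test/ln pair above rebuilds OpenSSL's hash-named links: every CA is linked into /etc/ssl/certs under its subject hash so that hash-based lookup during verification finds it (b5213941.0 for minikubeCA.pem, 51391683.0 for 1390.pem, 3ec20f2e.0 for 13902.pem). The same link can be recreated by hand:

	# Sketch of what the run above does for one CA: compute the subject hash,
	# then point <hash>.0 at the PEM inside /etc/ssl/certs.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"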
	I0729 16:42:58.727831    4389 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 16:42:58.729536    4389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 16:42:58.731311    4389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 16:42:58.733391    4389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 16:42:58.735198    4389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 16:42:58.737221    4389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 16:42:58.739050    4389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
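The -checkend 86400 probes above make openssl exit non-zero if a certificate expires within the next 24 hours, so the control-plane certs can be screened for regeneration from the exit status alone, without parsing dates. In script form:

	# openssl x509 -checkend N exits 0 if the cert is still valid N seconds
	# from now, 1 otherwise; the exit status alone drives the decision.
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	    echo "cert good for at least another day"
	else
	    echo "cert expires within 86400s"
	fi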
	I0729 16:42:58.740830    4389 kubeadm.go:392] StartCluster: {Name:running-upgrade-980000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50302 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 16:42:58.740892    4389 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 16:42:58.751375    4389 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 16:42:58.754907    4389 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 16:42:58.754913    4389 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 16:42:58.754935    4389 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 16:42:58.757694    4389 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 16:42:58.757942    4389 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-980000" does not appear in /Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:42:58.757995    4389 kubeconfig.go:62] /Users/jenkins/minikube-integration/19347-923/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-980000" cluster setting kubeconfig missing "running-upgrade-980000" context setting]
	I0729 16:42:58.758140    4389 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/kubeconfig: {Name:mkd561657b833051fbf9227370398307b87f9720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:42:58.758844    4389 kapi.go:59] client config for running-upgrade-980000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/client.key", CAFile:"/Users/jenkins/minikube-integration/19347-923/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105ef4080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 16:42:58.759177    4389 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 16:42:58.762474    4389 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-980000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
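The drift check is simply the diff run above: diff -u exits 1 when the rendered /var/tmp/minikube/kubeadm.yaml.new differs from the file already on disk. The two hunks show what changed across the upgrade: the cri-dockerd socket is now addressed as a unix:// URI, and the kubelet config moves from the systemd cgroup driver to cgroupfs while adding hairpinMode and runtimeRequestTimeout. A minimal sketch of the same gate:

	# diff exits 0 when identical, 1 when different; a non-zero status is
	# what routes the restart down the reconfigure path seen below.
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	    || echo "kubeadm config drift detected; reconfiguring"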
	I0729 16:42:58.762480    4389 kubeadm.go:1160] stopping kube-system containers ...
	I0729 16:42:58.762524    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 16:42:58.772945    4389 docker.go:483] Stopping containers: [06e213131677 1613a3fa9e62 f945667ff622 f1081b26aebd 84e9a482e950 4fa2e5620cad 61d2b499931e 1b2dfc87f3de 7c093af5a7a3 40f7e7f0317e 51c2ed142db0 b61223da73ae]
	I0729 16:42:58.773012    4389 ssh_runner.go:195] Run: docker stop 06e213131677 1613a3fa9e62 f945667ff622 f1081b26aebd 84e9a482e950 4fa2e5620cad 61d2b499931e 1b2dfc87f3de 7c093af5a7a3 40f7e7f0317e 51c2ed142db0 b61223da73ae
	I0729 16:42:58.786507    4389 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 16:42:58.876685    4389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 16:42:58.880964    4389 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Jul 29 23:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Jul 29 23:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 29 23:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Jul 29 23:42 /etc/kubernetes/scheduler.conf
	
	I0729 16:42:58.880995    4389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50302 /etc/kubernetes/admin.conf
	I0729 16:42:58.884407    4389 kubeadm.go:163] "https://control-plane.minikube.internal:50302" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50302 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 16:42:58.884432    4389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 16:42:58.888025    4389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50302 /etc/kubernetes/kubelet.conf
	I0729 16:42:58.891244    4389 kubeadm.go:163] "https://control-plane.minikube.internal:50302" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50302 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 16:42:58.891270    4389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 16:42:58.894004    4389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50302 /etc/kubernetes/controller-manager.conf
	I0729 16:42:58.896597    4389 kubeadm.go:163] "https://control-plane.minikube.internal:50302" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50302 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 16:42:58.896625    4389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 16:42:58.899454    4389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50302 /etc/kubernetes/scheduler.conf
	I0729 16:42:58.901993    4389 kubeadm.go:163] "https://control-plane.minikube.internal:50302" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50302 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 16:42:58.902013    4389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 16:42:58.904531    4389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 16:42:58.907688    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:42:58.949952    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:42:59.611666    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:42:59.798501    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:42:59.820943    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
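Rather than a full kubeadm init, the restart replays individual phases against the existing cluster state, in the order shown above: certs, kubeconfig, kubelet-start, control-plane, then local etcd. Condensed into a loop equivalent to the five commands above:

	# Same five phases as the run above, using the pinned kubeadm binary.
	# $phase is deliberately unquoted so "certs all" splits into two args.
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
	        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done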
	I0729 16:42:59.843542    4389 api_server.go:52] waiting for apiserver process to appear ...
	I0729 16:42:59.843622    4389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:43:00.345940    4389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:43:00.845680    4389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:43:00.850041    4389 api_server.go:72] duration metric: took 1.006515583s to wait for apiserver process to appear ...
	I0729 16:43:00.850050    4389 api_server.go:88] waiting for apiserver healthz status ...
	I0729 16:43:00.850059    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:43:05.850481    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:43:05.850525    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:43:10.852122    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:43:10.852199    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:43:15.853043    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:43:15.853155    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:43:20.854180    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:43:20.854262    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:43:25.855766    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:43:25.855811    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:43:30.857591    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:43:30.857684    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:43:35.859931    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:43:35.860000    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:43:40.862588    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:43:40.862670    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:43:45.865223    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:43:45.865288    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:43:50.867844    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:43:50.867928    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:43:55.870496    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:43:55.870576    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:44:00.873157    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
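Each healthz probe above dies on what the timestamps suggest is a 5-second client timeout; after a window of consecutive failures the runner stops waiting and switches to collecting diagnostics (the docker ps / docker logs blocks below), then resumes probing. A manual equivalent of the probe (illustrative; these curl flags are an assumption, not what the runner itself executes):

	# -k: the apiserver's cert is not in the local trust store;
	# --max-time 5 mirrors the apparent 5s client timeout in the log.
	curl -sk --max-time 5 https://10.0.2.15:8443/healthz && echo "apiserver healthy"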
	I0729 16:44:00.873410    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:44:00.895486    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:44:00.895605    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:44:00.910149    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:44:00.910238    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:44:00.922505    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:44:00.922578    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:44:00.933279    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:44:00.933353    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:44:00.947229    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:44:00.947292    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:44:00.957213    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:44:00.957294    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:44:00.967525    4389 logs.go:276] 0 containers: []
	W0729 16:44:00.967538    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:44:00.967598    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:44:00.978054    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:44:00.978070    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:44:00.978084    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:44:00.982688    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:44:00.982697    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:44:01.051076    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:44:01.051091    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:44:01.070701    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:44:01.070714    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:44:01.088184    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:44:01.088197    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:44:01.114627    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:44:01.114634    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:44:01.154794    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:44:01.154801    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:44:01.168631    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:44:01.168641    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:44:01.183446    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:44:01.183456    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:44:01.195452    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:44:01.195465    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:44:01.207209    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:44:01.207222    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:44:01.219781    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:44:01.219794    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:44:01.231322    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:44:01.231334    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:44:01.246157    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:44:01.246167    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:44:01.257763    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:44:01.257774    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:44:01.269533    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:44:01.269544    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:44:03.782867    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:44:08.783838    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:44:08.784140    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:44:08.817164    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:44:08.817285    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:44:08.833086    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:44:08.833177    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:44:08.845038    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:44:08.845109    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:44:08.855696    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:44:08.855777    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:44:08.865855    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:44:08.865917    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:44:08.876771    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:44:08.876833    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:44:08.887094    4389 logs.go:276] 0 containers: []
	W0729 16:44:08.887105    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:44:08.887178    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:44:08.897749    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:44:08.897764    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:44:08.897770    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:44:08.937358    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:44:08.937368    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:44:08.951866    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:44:08.951876    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:44:08.963205    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:44:08.963219    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:44:08.974813    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:44:08.974826    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:44:08.989173    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:44:08.989183    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:44:09.000593    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:44:09.000602    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:44:09.023206    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:44:09.023215    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:44:09.036117    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:44:09.036127    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:44:09.047733    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:44:09.047759    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:44:09.059247    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:44:09.059257    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:44:09.063801    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:44:09.063807    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:44:09.098643    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:44:09.098658    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:44:09.112259    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:44:09.112270    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:44:09.130137    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:44:09.130148    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:44:09.141649    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:44:09.141661    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:44:11.670095    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:44:16.672777    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:44:16.673237    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:44:16.711202    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:44:16.711364    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:44:16.733006    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:44:16.733095    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:44:16.748396    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:44:16.748471    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:44:16.761151    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:44:16.761216    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:44:16.776020    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:44:16.776089    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:44:16.787080    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:44:16.787151    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:44:16.797836    4389 logs.go:276] 0 containers: []
	W0729 16:44:16.797848    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:44:16.797927    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:44:16.811997    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:44:16.812014    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:44:16.812019    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:44:16.816223    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:44:16.816232    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:44:16.827200    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:44:16.827212    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:44:16.842267    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:44:16.842281    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:44:16.876283    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:44:16.876296    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:44:16.894641    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:44:16.894653    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:44:16.906444    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:44:16.906457    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:44:16.948622    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:44:16.948632    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:44:16.962917    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:44:16.962930    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:44:16.980678    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:44:16.980690    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:44:16.991569    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:44:16.991577    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:44:17.002703    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:44:17.002716    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:44:17.026697    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:44:17.026703    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:44:17.040734    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:44:17.040743    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:44:17.054786    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:44:17.054800    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:44:17.066486    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:44:17.066495    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:44:19.578017    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:44:24.580684    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:44:24.580850    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:44:24.595121    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:44:24.595197    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:44:24.606175    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:44:24.606240    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:44:24.620149    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:44:24.620214    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:44:24.630795    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:44:24.630872    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:44:24.641162    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:44:24.641226    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:44:24.651449    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:44:24.651515    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:44:24.661349    4389 logs.go:276] 0 containers: []
	W0729 16:44:24.661359    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:44:24.661412    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:44:24.673506    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:44:24.673527    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:44:24.673531    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:44:24.684832    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:44:24.684843    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:44:24.695799    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:44:24.695811    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:44:24.707824    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:44:24.707836    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:44:24.718911    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:44:24.718922    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:44:24.723023    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:44:24.723031    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:44:24.743185    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:44:24.743196    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:44:24.761944    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:44:24.761954    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:44:24.780073    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:44:24.780082    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:44:24.791404    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:44:24.791416    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:44:24.815743    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:44:24.815749    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:44:24.854641    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:44:24.854653    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:44:24.892917    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:44:24.892932    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:44:24.907009    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:44:24.907022    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:44:24.922020    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:44:24.922033    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:44:24.933394    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:44:24.933403    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:44:27.446227    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:44:32.448942    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:44:32.449218    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:44:32.473046    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:44:32.473162    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:44:32.490109    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:44:32.490186    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:44:32.502801    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:44:32.502871    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:44:32.514835    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:44:32.514901    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:44:32.527359    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:44:32.527428    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:44:32.537570    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:44:32.537627    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:44:32.548184    4389 logs.go:276] 0 containers: []
	W0729 16:44:32.548193    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:44:32.548243    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:44:32.558798    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:44:32.558812    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:44:32.558818    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:44:32.569495    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:44:32.569506    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:44:32.584034    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:44:32.584046    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:44:32.597676    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:44:32.597690    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:44:32.615652    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:44:32.615664    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:44:32.639691    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:44:32.639701    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:44:32.653463    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:44:32.653472    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:44:32.665020    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:44:32.665033    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:44:32.685774    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:44:32.685785    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:44:32.696698    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:44:32.696708    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:44:32.709297    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:44:32.709307    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:44:32.713493    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:44:32.713501    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:44:32.747966    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:44:32.747980    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:44:32.759520    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:44:32.759533    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:44:32.774465    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:44:32.774477    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:44:32.786235    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:44:32.786248    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:44:35.329195    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:44:40.332015    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:44:40.332427    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:44:40.369646    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:44:40.369777    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:44:40.391767    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:44:40.391875    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:44:40.407073    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:44:40.407156    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:44:40.423154    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:44:40.423226    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:44:40.433583    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:44:40.433656    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:44:40.443603    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:44:40.443672    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:44:40.458499    4389 logs.go:276] 0 containers: []
	W0729 16:44:40.458515    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:44:40.458577    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:44:40.469359    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:44:40.469377    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:44:40.469383    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:44:40.481327    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:44:40.481341    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:44:40.507553    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:44:40.507561    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:44:40.546097    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:44:40.546106    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:44:40.562521    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:44:40.562533    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:44:40.576266    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:44:40.576277    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:44:40.587603    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:44:40.587616    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:44:40.604703    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:44:40.604715    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:44:40.616273    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:44:40.616286    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:44:40.629911    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:44:40.629920    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:44:40.641226    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:44:40.641240    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:44:40.652236    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:44:40.652246    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:44:40.656293    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:44:40.656302    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:44:40.690645    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:44:40.690657    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:44:40.703315    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:44:40.703328    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:44:40.717687    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:44:40.717696    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:44:43.230969    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:44:48.233655    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:44:48.233946    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:44:48.258626    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:44:48.258741    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:44:48.280939    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:44:48.281009    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:44:48.292601    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:44:48.292673    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:44:48.303236    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:44:48.303304    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:44:48.316693    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:44:48.316768    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:44:48.327028    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:44:48.327095    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:44:48.337423    4389 logs.go:276] 0 containers: []
	W0729 16:44:48.337435    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:44:48.337499    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:44:48.347457    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:44:48.347473    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:44:48.347479    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:44:48.359129    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:44:48.359141    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:44:48.385122    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:44:48.385132    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:44:48.403493    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:44:48.403507    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:44:48.414687    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:44:48.414702    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:44:48.419162    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:44:48.419171    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:44:48.432820    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:44:48.432831    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:44:48.445123    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:44:48.445134    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:44:48.456800    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:44:48.456811    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:44:48.493936    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:44:48.493946    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:44:48.515404    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:44:48.515417    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:44:48.533194    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:44:48.533207    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:44:48.544348    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:44:48.544364    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:44:48.583442    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:44:48.583449    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:44:48.597248    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:44:48.597259    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:44:48.609539    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:44:48.609553    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:44:51.123521    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:44:56.126311    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:44:56.127143    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:44:56.168578    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:44:56.168724    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:44:56.190850    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:44:56.190974    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:44:56.205418    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:44:56.205492    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:44:56.217307    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:44:56.217380    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:44:56.230095    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:44:56.230165    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:44:56.244199    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:44:56.244271    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:44:56.254665    4389 logs.go:276] 0 containers: []
	W0729 16:44:56.254681    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:44:56.254740    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:44:56.265251    4389 logs.go:276] 1 containers: [29829f57a242]
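Before each gather pass, the component containers are re-discovered by name. Under the Docker runtime, kubelet-managed containers are named k8s_<component>_<pod>_..., so one name filter per component suffices; a compact equivalent of the eight queries above:

    # One docker ps query per control-plane component; prints matching IDs
    # (possibly none, as for kindnet here).
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'
    done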
	I0729 16:44:56.265266    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:44:56.265273    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:44:56.302655    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:44:56.302666    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:44:56.314095    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:44:56.314108    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:44:56.339430    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:44:56.339438    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:44:56.351347    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:44:56.351356    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:44:56.362774    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:44:56.362789    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:44:56.375176    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:44:56.375186    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:44:56.392345    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:44:56.392360    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:44:56.406858    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:44:56.406873    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:44:56.418665    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:44:56.418679    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:44:56.433149    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:44:56.433162    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:44:56.451763    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:44:56.451777    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:44:56.463351    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:44:56.463361    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:44:56.504887    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:44:56.504895    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:44:56.508806    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:44:56.508812    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:44:56.527234    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:44:56.527247    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:44:59.040288    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:45:04.042435    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:45:04.042496    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:45:04.055264    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:45:04.055333    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:45:04.066376    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:45:04.066442    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:45:04.077057    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:45:04.077127    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:45:04.087386    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:45:04.087459    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:45:04.097787    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:45:04.097851    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:45:04.108158    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:45:04.108224    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:45:04.121320    4389 logs.go:276] 0 containers: []
	W0729 16:45:04.121331    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:45:04.121391    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:45:04.131281    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:45:04.131300    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:45:04.131306    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:45:04.145346    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:45:04.145359    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:45:04.162419    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:45:04.162428    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:45:04.173830    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:45:04.173841    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:45:04.178033    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:45:04.178042    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:45:04.188854    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:45:04.188863    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:45:04.200497    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:45:04.200507    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:45:04.215807    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:45:04.215816    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:45:04.260945    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:45:04.260953    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:45:04.295126    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:45:04.295136    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:45:04.309154    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:45:04.309166    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:45:04.323846    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:45:04.323857    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:45:04.347786    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:45:04.347792    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:45:04.360005    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:45:04.360015    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:45:04.374386    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:45:04.374396    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:45:04.390624    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:45:04.390634    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:45:06.905319    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:45:11.906115    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:45:11.906252    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:45:11.918858    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:45:11.918936    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:45:11.931167    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:45:11.931235    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:45:11.943366    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:45:11.943441    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:45:11.954315    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:45:11.954391    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:45:11.965550    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:45:11.965628    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:45:11.976909    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:45:11.976981    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:45:11.987778    4389 logs.go:276] 0 containers: []
	W0729 16:45:11.987791    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:45:11.987859    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:45:12.004059    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:45:12.004076    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:45:12.004083    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:45:12.016329    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:45:12.016339    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:45:12.028554    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:45:12.028565    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:45:12.042143    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:45:12.042155    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:45:12.056884    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:45:12.056896    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:45:12.069053    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:45:12.069064    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:45:12.080443    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:45:12.080453    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:45:12.114849    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:45:12.114860    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:45:12.127252    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:45:12.127261    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:45:12.141317    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:45:12.141327    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:45:12.158171    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:45:12.158181    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:45:12.176884    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:45:12.176894    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:45:12.200734    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:45:12.200742    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:45:12.240911    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:45:12.240918    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:45:12.245215    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:45:12.245222    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:45:12.259736    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:45:12.259748    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:45:14.776068    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:45:19.778363    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:45:19.778489    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:45:19.789956    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:45:19.790036    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:45:19.800953    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:45:19.801036    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:45:19.813280    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:45:19.813348    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:45:19.823447    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:45:19.823511    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:45:19.834180    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:45:19.834250    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:45:19.845403    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:45:19.845474    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:45:19.855958    4389 logs.go:276] 0 containers: []
	W0729 16:45:19.855968    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:45:19.856027    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:45:19.866736    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:45:19.866751    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:45:19.866757    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:45:19.881980    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:45:19.881994    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:45:19.924891    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:45:19.924903    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:45:19.939353    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:45:19.939363    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:45:19.952270    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:45:19.952282    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:45:19.963744    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:45:19.963755    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:45:19.975943    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:45:19.975955    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:45:20.010915    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:45:20.010926    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:45:20.024022    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:45:20.024033    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:45:20.038864    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:45:20.038878    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:45:20.051283    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:45:20.051294    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:45:20.064368    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:45:20.064379    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:45:20.086649    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:45:20.086662    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:45:20.111264    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:45:20.111271    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:45:20.115705    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:45:20.115712    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:45:20.129550    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:45:20.129560    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:45:22.650895    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:45:27.653253    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:45:27.653705    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:45:27.694075    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:45:27.694224    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:45:27.715726    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:45:27.715854    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:45:27.732100    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:45:27.732183    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:45:27.744579    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:45:27.744645    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:45:27.755237    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:45:27.755300    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:45:27.767015    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:45:27.767075    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:45:27.777202    4389 logs.go:276] 0 containers: []
	W0729 16:45:27.777213    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:45:27.777282    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:45:27.797495    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:45:27.797513    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:45:27.797522    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:45:27.809453    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:45:27.809466    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:45:27.821024    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:45:27.821036    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:45:27.832434    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:45:27.832448    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:45:27.844536    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:45:27.844547    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:45:27.885130    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:45:27.885141    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:45:27.899306    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:45:27.899318    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:45:27.912220    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:45:27.912233    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:45:27.926773    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:45:27.926784    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:45:27.931517    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:45:27.931527    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:45:27.966186    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:45:27.966199    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:45:27.981037    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:45:27.981047    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:45:27.998493    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:45:27.998506    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:45:28.011092    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:45:28.011104    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:45:28.035093    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:45:28.035100    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:45:28.049410    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:45:28.049422    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:45:30.565864    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:45:35.568199    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:45:35.568371    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:45:35.582256    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:45:35.582320    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:45:35.593456    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:45:35.593529    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:45:35.604279    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:45:35.604345    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:45:35.615925    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:45:35.616006    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:45:35.627759    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:45:35.627830    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:45:35.639382    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:45:35.639454    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:45:35.650866    4389 logs.go:276] 0 containers: []
	W0729 16:45:35.650881    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:45:35.650940    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:45:35.665618    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:45:35.665638    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:45:35.665644    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:45:35.709929    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:45:35.709942    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:45:35.714579    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:45:35.714587    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:45:35.729674    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:45:35.729686    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:45:35.742016    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:45:35.742030    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:45:35.786097    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:45:35.786112    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:45:35.800810    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:45:35.800822    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:45:35.813988    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:45:35.814002    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:45:35.828096    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:45:35.828109    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:45:35.839627    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:45:35.839640    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:45:35.854991    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:45:35.855001    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:45:35.880307    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:45:35.880319    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:45:35.894449    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:45:35.894464    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:45:35.912718    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:45:35.912729    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:45:35.924787    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:45:35.924798    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:45:35.939804    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:45:35.939820    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:45:38.453822    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:45:43.456082    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:45:43.456197    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:45:43.472131    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:45:43.472210    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:45:43.485029    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:45:43.485103    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:45:43.497811    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:45:43.497882    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:45:43.509404    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:45:43.509477    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:45:43.525844    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:45:43.525917    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:45:43.536695    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:45:43.536761    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:45:43.547216    4389 logs.go:276] 0 containers: []
	W0729 16:45:43.547228    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:45:43.547285    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:45:43.558650    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:45:43.558668    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:45:43.558674    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:45:43.571084    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:45:43.571096    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:45:43.575888    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:45:43.575898    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:45:43.590414    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:45:43.590427    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:45:43.606048    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:45:43.606064    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:45:43.618997    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:45:43.619011    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:45:43.663433    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:45:43.663456    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:45:43.701195    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:45:43.701209    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:45:43.715352    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:45:43.715367    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:45:43.741586    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:45:43.741602    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:45:43.754969    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:45:43.754981    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:45:43.771878    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:45:43.771888    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:45:43.784262    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:45:43.784277    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:45:43.799310    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:45:43.799322    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:45:43.811662    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:45:43.811676    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:45:43.831131    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:45:43.831145    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:45:46.345500    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:45:51.348053    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:45:51.348297    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:45:51.376636    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:45:51.376759    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:45:51.393828    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:45:51.393916    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:45:51.407758    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:45:51.407838    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:45:51.419380    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:45:51.419445    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:45:51.430317    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:45:51.430389    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:45:51.441095    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:45:51.441161    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:45:51.451266    4389 logs.go:276] 0 containers: []
	W0729 16:45:51.451277    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:45:51.451334    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:45:51.461562    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:45:51.461577    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:45:51.461584    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:45:51.473499    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:45:51.473510    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:45:51.492348    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:45:51.492359    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:45:51.515516    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:45:51.515528    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:45:51.530837    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:45:51.530851    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:45:51.549873    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:45:51.549886    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:45:51.564824    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:45:51.564834    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:45:51.581982    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:45:51.581996    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:45:51.593761    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:45:51.593772    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:45:51.606119    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:45:51.606130    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:45:51.610838    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:45:51.610847    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:45:51.645786    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:45:51.645795    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:45:51.660813    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:45:51.660821    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:45:51.672312    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:45:51.672323    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:45:51.716988    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:45:51.716998    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:45:51.731729    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:45:51.731744    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:45:54.246856    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:45:59.249493    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:45:59.249688    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:45:59.270108    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:45:59.270192    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:45:59.284071    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:45:59.284137    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:45:59.296558    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:45:59.296632    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:45:59.308143    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:45:59.308221    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:45:59.318444    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:45:59.318508    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:45:59.328801    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:45:59.328871    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:45:59.339590    4389 logs.go:276] 0 containers: []
	W0729 16:45:59.339602    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:45:59.339658    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:45:59.350093    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:45:59.350109    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:45:59.350116    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:45:59.386347    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:45:59.386363    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:45:59.401190    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:45:59.401201    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:45:59.413277    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:45:59.413287    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:45:59.419546    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:45:59.419556    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:45:59.454253    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:45:59.454269    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:45:59.473556    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:45:59.473569    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:45:59.485192    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:45:59.485202    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:45:59.509909    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:45:59.509917    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:45:59.552172    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:45:59.552182    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:45:59.569739    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:45:59.569749    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:45:59.581303    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:45:59.581314    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:45:59.593858    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:45:59.593870    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:45:59.608847    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:45:59.608859    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:45:59.642870    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:45:59.642882    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:45:59.654015    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:45:59.654025    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:46:02.168371    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:07.170936    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:46:07.171045    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:46:07.185359    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:46:07.185432    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:46:07.196695    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:46:07.196771    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:46:07.207199    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:46:07.207270    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:46:07.218238    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:46:07.218308    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:46:07.234337    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:46:07.234405    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:46:07.245002    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:46:07.245066    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:46:07.255502    4389 logs.go:276] 0 containers: []
	W0729 16:46:07.255513    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:46:07.255571    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:46:07.266556    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:46:07.266573    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:46:07.266579    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:46:07.291596    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:46:07.291602    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:46:07.305481    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:46:07.305492    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:46:07.324441    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:46:07.324451    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:46:07.336704    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:46:07.336714    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:46:07.351355    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:46:07.351365    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:46:07.363426    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:46:07.363435    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:46:07.376098    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:46:07.376110    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:46:07.387256    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:46:07.387267    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:46:07.399209    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:46:07.399221    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:46:07.410481    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:46:07.410496    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:46:07.414871    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:46:07.414877    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:46:07.451585    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:46:07.451597    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:46:07.470410    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:46:07.470421    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:46:07.482172    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:46:07.482183    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:46:07.522352    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:46:07.522362    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:46:10.045349    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:15.046146    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:46:15.046302    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:46:15.057379    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:46:15.057451    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:46:15.068527    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:46:15.068602    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:46:15.079101    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:46:15.079172    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:46:15.092445    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:46:15.092518    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:46:15.103718    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:46:15.103792    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:46:15.114038    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:46:15.114110    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:46:15.123897    4389 logs.go:276] 0 containers: []
	W0729 16:46:15.123907    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:46:15.123966    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:46:15.134203    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:46:15.134221    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:46:15.134228    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:46:15.138501    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:46:15.138507    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:46:15.150943    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:46:15.150952    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:46:15.165620    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:46:15.165642    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:46:15.177578    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:46:15.177590    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:46:15.193278    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:46:15.193288    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:46:15.204534    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:46:15.204548    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:46:15.234815    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:46:15.234826    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:46:15.272961    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:46:15.272972    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:46:15.291695    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:46:15.291706    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:46:15.307186    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:46:15.307197    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:46:15.318873    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:46:15.318882    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:46:15.359584    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:46:15.359592    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:46:15.373784    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:46:15.373794    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:46:15.385582    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:46:15.385592    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:46:15.407643    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:46:15.407652    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
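The block above is one iteration of minikube's apiserver wait loop: probe /healthz, and when the probe times out, sweep every control-plane container's logs for diagnostics before retrying. A minimal shell equivalent of the probe (the IP, port, and the roughly five-second budget are taken from the "Checking"/"stopped" pairs in this log; minikube itself does this with a Go HTTP client, not curl):

    # probe the apiserver health endpoint the way the wait loop does;
    # -k skips TLS verification since the cluster CA isn't in the host trust store
    curl -sk --max-time 5 https://10.0.2.15:8443/healthz && echo " ok"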
	I0729 16:46:17.933963    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:22.936283    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:46:22.936499    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:46:22.960124    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:46:22.960229    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:46:22.978502    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:46:22.978581    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:46:22.991066    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:46:22.991137    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:46:23.001908    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:46:23.001982    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:46:23.012563    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:46:23.012631    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:46:23.026883    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:46:23.026955    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:46:23.037048    4389 logs.go:276] 0 containers: []
	W0729 16:46:23.037064    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:46:23.037118    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:46:23.047888    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:46:23.047903    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:46:23.047909    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:46:23.052967    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:46:23.052973    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:46:23.067039    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:46:23.067049    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:46:23.085158    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:46:23.085168    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:46:23.102147    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:46:23.102158    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:46:23.113777    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:46:23.113787    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:46:23.138224    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:46:23.138232    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:46:23.172847    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:46:23.172857    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:46:23.186006    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:46:23.186015    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:46:23.203269    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:46:23.203279    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:46:23.215581    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:46:23.215592    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:46:23.229876    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:46:23.229889    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:46:23.244873    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:46:23.244884    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:46:23.261578    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:46:23.261587    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:46:23.300462    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:46:23.300476    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:46:23.311419    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:46:23.311430    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:46:25.824561    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:30.826666    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:46:30.826796    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:46:30.839366    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:46:30.839442    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:46:30.851921    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:46:30.851998    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:46:30.864772    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:46:30.864840    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:46:30.875981    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:46:30.876055    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:46:30.886709    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:46:30.886780    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:46:30.898188    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:46:30.898260    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:46:30.909990    4389 logs.go:276] 0 containers: []
	W0729 16:46:30.910002    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:46:30.910065    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:46:30.920829    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:46:30.920846    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:46:30.920852    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:46:30.962620    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:46:30.962635    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:46:30.976223    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:46:30.976237    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:46:30.989402    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:46:30.989414    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:46:30.994029    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:46:30.994037    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:46:31.009878    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:46:31.009892    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:46:31.028332    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:46:31.028349    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:46:31.040426    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:46:31.040438    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:46:31.080276    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:46:31.080291    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:46:31.094934    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:46:31.094945    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:46:31.110971    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:46:31.110983    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:46:31.130551    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:46:31.130561    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:46:31.142127    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:46:31.142141    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:46:31.154521    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:46:31.154533    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:46:31.166513    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:46:31.166524    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:46:31.178187    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:46:31.178200    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:46:33.705216    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:38.706398    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:46:38.706557    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:46:38.719257    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:46:38.719339    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:46:38.737509    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:46:38.737588    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:46:38.748203    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:46:38.748273    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:46:38.758490    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:46:38.758562    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:46:38.768904    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:46:38.768977    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:46:38.779348    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:46:38.779420    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:46:38.790109    4389 logs.go:276] 0 containers: []
	W0729 16:46:38.790120    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:46:38.790179    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:46:38.801069    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:46:38.801085    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:46:38.801092    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:46:38.805335    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:46:38.805343    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:46:38.839045    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:46:38.839059    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:46:38.854217    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:46:38.854226    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:46:38.867552    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:46:38.867563    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:46:38.879487    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:46:38.879498    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:46:38.891723    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:46:38.891734    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:46:38.906053    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:46:38.906064    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:46:38.919955    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:46:38.919966    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:46:38.931281    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:46:38.931292    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:46:38.943125    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:46:38.943136    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:46:38.967203    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:46:38.967210    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:46:39.009053    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:46:39.009064    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:46:39.021571    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:46:39.021582    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:46:39.036277    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:46:39.036288    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:46:39.048801    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:46:39.048812    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:46:41.568029    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:46.570249    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:46:46.570421    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:46:46.581578    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:46:46.581655    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:46:46.592846    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:46:46.592915    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:46:46.603424    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:46:46.603508    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:46:46.614244    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:46:46.614317    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:46:46.626187    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:46:46.626253    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:46:46.640141    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:46:46.640208    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:46:46.651233    4389 logs.go:276] 0 containers: []
	W0729 16:46:46.651242    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:46:46.651311    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:46:46.661671    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:46:46.661687    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:46:46.661693    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:46:46.703433    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:46:46.703444    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:46:46.717375    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:46:46.717386    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:46:46.732661    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:46:46.732672    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:46:46.744293    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:46:46.744304    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:46:46.756213    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:46:46.756224    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:46:46.761072    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:46:46.761079    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:46:46.775329    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:46:46.775340    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:46:46.787079    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:46:46.787090    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:46:46.798637    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:46:46.798648    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:46:46.822055    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:46:46.822064    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:46:46.855985    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:46:46.855995    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:46:46.870160    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:46:46.870173    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:46:46.882850    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:46:46.882861    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:46:46.894860    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:46:46.894871    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:46:46.914255    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:46:46.914266    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:46:49.427409    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:54.429565    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:46:54.429759    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:46:54.447609    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:46:54.447692    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:46:54.460886    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:46:54.460961    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:46:54.472398    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:46:54.472473    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:46:54.483461    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:46:54.483530    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:46:54.499196    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:46:54.499268    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:46:54.511978    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:46:54.512044    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:46:54.529002    4389 logs.go:276] 0 containers: []
	W0729 16:46:54.529013    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:46:54.529073    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:46:54.539262    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:46:54.539279    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:46:54.539284    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:46:54.563741    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:46:54.563759    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:46:54.576028    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:46:54.576046    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:46:54.580978    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:46:54.580985    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:46:54.616342    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:46:54.616355    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:46:54.630637    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:46:54.630648    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:46:54.649608    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:46:54.649619    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:46:54.661495    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:46:54.661506    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:46:54.673955    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:46:54.673965    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:46:54.691407    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:46:54.691417    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:46:54.703117    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:46:54.703131    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:46:54.714989    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:46:54.715002    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:46:54.754074    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:46:54.754081    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:46:54.768049    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:46:54.768058    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:46:54.779345    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:46:54.779355    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:46:54.794230    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:46:54.794239    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:46:57.307777    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:02.310494    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:02.310579    4389 kubeadm.go:597] duration metric: took 4m3.559144333s to restartPrimaryControlPlane
	W0729 16:47:02.310631    4389 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 16:47:02.310652    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 16:47:03.299426    4389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 16:47:03.304218    4389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 16:47:03.306985    4389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 16:47:03.309663    4389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 16:47:03.309668    4389 kubeadm.go:157] found existing configuration files:
	
	I0729 16:47:03.309692    4389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50302 /etc/kubernetes/admin.conf
	I0729 16:47:03.312454    4389 kubeadm.go:163] "https://control-plane.minikube.internal:50302" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50302 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 16:47:03.312483    4389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 16:47:03.315342    4389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50302 /etc/kubernetes/kubelet.conf
	I0729 16:47:03.317626    4389 kubeadm.go:163] "https://control-plane.minikube.internal:50302" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50302 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 16:47:03.317648    4389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 16:47:03.320678    4389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50302 /etc/kubernetes/controller-manager.conf
	I0729 16:47:03.323400    4389 kubeadm.go:163] "https://control-plane.minikube.internal:50302" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50302 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 16:47:03.323421    4389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 16:47:03.325974    4389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50302 /etc/kubernetes/scheduler.conf
	I0729 16:47:03.329074    4389 kubeadm.go:163] "https://control-plane.minikube.internal:50302" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50302 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 16:47:03.329096    4389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
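What happened in this stretch: the ls check at 16:47:03.30 found none of the four kubeconfigs, so minikube treated each as potentially stale, grepped each for the expected control-plane endpoint, and removed any file that did not contain it (here every grep exits with status 2 simply because the files do not exist). Condensed into a sketch, with the endpoint string and file names taken verbatim from the log:

    # remove any kubeconfig that doesn't point at the expected control-plane endpoint
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:50302" /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf
    done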
	I0729 16:47:03.332233    4389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 16:47:03.352242    4389 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 16:47:03.352277    4389 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 16:47:03.401829    4389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 16:47:03.401886    4389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 16:47:03.401941    4389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 16:47:03.449822    4389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 16:47:03.454002    4389 out.go:204]   - Generating certificates and keys ...
	I0729 16:47:03.454036    4389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 16:47:03.454078    4389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 16:47:03.454122    4389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 16:47:03.454159    4389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 16:47:03.454199    4389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 16:47:03.454228    4389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 16:47:03.454268    4389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 16:47:03.454303    4389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 16:47:03.454351    4389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 16:47:03.454397    4389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 16:47:03.454422    4389 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 16:47:03.454454    4389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 16:47:03.514753    4389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 16:47:03.553967    4389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 16:47:03.613309    4389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 16:47:03.723956    4389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 16:47:03.753530    4389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 16:47:03.754629    4389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 16:47:03.754656    4389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 16:47:03.822971    4389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 16:47:03.826006    4389 out.go:204]   - Booting up control plane ...
	I0729 16:47:03.826051    4389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 16:47:03.826095    4389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 16:47:03.826165    4389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 16:47:03.826287    4389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 16:47:03.826390    4389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 16:47:07.826351    4389 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001776 seconds
	I0729 16:47:07.826410    4389 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 16:47:07.830741    4389 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 16:47:08.343913    4389 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 16:47:08.344183    4389 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-980000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 16:47:08.848874    4389 kubeadm.go:310] [bootstrap-token] Using token: f3lwuj.pt0shg6ftprwpz00
	I0729 16:47:08.855150    4389 out.go:204]   - Configuring RBAC rules ...
	I0729 16:47:08.855206    4389 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 16:47:08.855248    4389 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 16:47:08.856706    4389 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 16:47:08.857549    4389 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 16:47:08.858487    4389 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 16:47:08.859410    4389 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 16:47:08.862685    4389 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 16:47:09.037515    4389 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 16:47:09.253460    4389 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 16:47:09.254048    4389 kubeadm.go:310] 
	I0729 16:47:09.254077    4389 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 16:47:09.254080    4389 kubeadm.go:310] 
	I0729 16:47:09.254121    4389 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 16:47:09.254123    4389 kubeadm.go:310] 
	I0729 16:47:09.254135    4389 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 16:47:09.254163    4389 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 16:47:09.254201    4389 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 16:47:09.254231    4389 kubeadm.go:310] 
	I0729 16:47:09.254260    4389 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 16:47:09.254263    4389 kubeadm.go:310] 
	I0729 16:47:09.254288    4389 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 16:47:09.254303    4389 kubeadm.go:310] 
	I0729 16:47:09.254333    4389 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 16:47:09.254401    4389 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 16:47:09.254441    4389 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 16:47:09.254444    4389 kubeadm.go:310] 
	I0729 16:47:09.254524    4389 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 16:47:09.254612    4389 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 16:47:09.254617    4389 kubeadm.go:310] 
	I0729 16:47:09.254661    4389 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token f3lwuj.pt0shg6ftprwpz00 \
	I0729 16:47:09.254712    4389 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eee67fd9c4d612d4862a690faaa2f19934e920987025477254241b5525ba3040 \
	I0729 16:47:09.254723    4389 kubeadm.go:310] 	--control-plane 
	I0729 16:47:09.254726    4389 kubeadm.go:310] 
	I0729 16:47:09.254767    4389 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 16:47:09.254777    4389 kubeadm.go:310] 
	I0729 16:47:09.254825    4389 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token f3lwuj.pt0shg6ftprwpz00 \
	I0729 16:47:09.254872    4389 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eee67fd9c4d612d4862a690faaa2f19934e920987025477254241b5525ba3040 
	I0729 16:47:09.254938    4389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
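The --discovery-token-ca-cert-hash printed in the join commands above is a SHA-256 digest of the cluster CA's public key. If it is ever lost it can be recomputed on the control-plane node; assuming an RSA CA key and the certificate directory reported earlier in this log (/var/lib/minikube/certs), the standard recipe is:

    # recompute the discovery-token CA cert hash from the CA certificate
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* /sha256:/'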
	I0729 16:47:09.254947    4389 cni.go:84] Creating CNI manager for ""
	I0729 16:47:09.254956    4389 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:47:09.259190    4389 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 16:47:09.267225    4389 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 16:47:09.273305    4389 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
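The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not echoed in the log, so its exact contents are not recoverable here; a representative bridge-plugin conflist of the kind this step installs (the subnet is chosen purely for illustration) would look like the heredoc below:

    # illustrative bridge CNI config; field values are assumptions, not the logged payload
    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF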
	I0729 16:47:09.279422    4389 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 16:47:09.279486    4389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:47:09.279504    4389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-980000 minikube.k8s.io/updated_at=2024_07_29T16_47_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b13baeaf4895dcc6a8c5d0ab64a27ff86dff4ae3 minikube.k8s.io/name=running-upgrade-980000 minikube.k8s.io/primary=true
	I0729 16:47:09.317133    4389 ops.go:34] apiserver oom_adj: -16
	I0729 16:47:09.318018    4389 kubeadm.go:1113] duration metric: took 38.586792ms to wait for elevateKubeSystemPrivileges
	I0729 16:47:09.318028    4389 kubeadm.go:394] duration metric: took 4m10.580784083s to StartCluster
	I0729 16:47:09.318038    4389 settings.go:142] acquiring lock: {Name:mk3b097bc26d2850dd7467a616788f5486d088c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:47:09.318127    4389 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:47:09.318545    4389 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/kubeconfig: {Name:mkd561657b833051fbf9227370398307b87f9720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:47:09.318758    4389 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:47:09.318812    4389 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 16:47:09.318849    4389 config.go:182] Loaded profile config "running-upgrade-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:47:09.318853    4389 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-980000"
	I0729 16:47:09.318868    4389 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-980000"
	W0729 16:47:09.318871    4389 addons.go:243] addon storage-provisioner should already be in state true
	I0729 16:47:09.318884    4389 host.go:66] Checking if "running-upgrade-980000" exists ...
	I0729 16:47:09.318875    4389 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-980000"
	I0729 16:47:09.318913    4389 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-980000"
	I0729 16:47:09.319776    4389 kapi.go:59] client config for running-upgrade-980000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/client.key", CAFile:"/Users/jenkins/minikube-integration/19347-923/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105ef4080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 16:47:09.319894    4389 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-980000"
	W0729 16:47:09.319899    4389 addons.go:243] addon default-storageclass should already be in state true
	I0729 16:47:09.319906    4389 host.go:66] Checking if "running-upgrade-980000" exists ...
	I0729 16:47:09.323189    4389 out.go:177] * Verifying Kubernetes components...
	I0729 16:47:09.323544    4389 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 16:47:09.326251    4389 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 16:47:09.326258    4389 sshutil.go:53] new ssh client: &{IP:localhost Port:50270 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/running-upgrade-980000/id_rsa Username:docker}
	I0729 16:47:09.329128    4389 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:47:09.333203    4389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:47:09.337167    4389 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 16:47:09.337173    4389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 16:47:09.337179    4389 sshutil.go:53] new ssh client: &{IP:localhost Port:50270 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/running-upgrade-980000/id_rsa Username:docker}
	I0729 16:47:09.411124    4389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 16:47:09.415897    4389 api_server.go:52] waiting for apiserver process to appear ...
	I0729 16:47:09.415936    4389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:47:09.419610    4389 api_server.go:72] duration metric: took 100.841375ms to wait for apiserver process to appear ...
	I0729 16:47:09.419617    4389 api_server.go:88] waiting for apiserver healthz status ...
	I0729 16:47:09.419623    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:09.458073    4389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 16:47:09.475117    4389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 16:47:14.421664    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:14.421707    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:19.421933    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:19.421976    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:24.422258    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:24.422310    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:29.422711    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:29.422752    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:34.423322    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:34.423374    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:39.424033    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:39.424060    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 16:47:39.794562    4389 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 16:47:39.798767    4389 out.go:177] * Enabled addons: storage-provisioner
	I0729 16:47:39.806688    4389 addons.go:510] duration metric: took 30.488316416s for enable addons: enabled=[storage-provisioner]
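The split between the two addons is likely explained by where each call runs: storage-provisioner is enabled by kubectl apply executed inside the guest over SSH (the ssh_runner lines above), where the apiserver is reachable locally, while default-storageclass is driven by a client-go client on the host pointed at https://10.0.2.15:8443 (the rest.Config logged earlier), an address QEMU's user-mode networking does not expose to the host, hence the dial timeout. A hedged in-guest check that sidesteps the host-side routing problem, reusing the binary path and kubeconfig this log already uses:

    # run the same StorageClass listing from inside the guest, where 8443 is reachable
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get storageclasses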
	I0729 16:47:44.425009    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:44.425064    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:49.426300    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:49.426340    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:54.427900    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:54.427942    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:59.429921    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:59.429942    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:04.430214    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:04.430245    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:09.432398    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:09.432527    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:09.443338    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:48:09.443407    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:09.454023    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:48:09.454098    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:09.464713    4389 logs.go:276] 2 containers: [4b491e173233 af28ca5a05f8]
	I0729 16:48:09.464783    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:09.474919    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:48:09.474988    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:09.485239    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:48:09.485314    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:09.495642    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:48:09.495721    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:09.506177    4389 logs.go:276] 0 containers: []
	W0729 16:48:09.506188    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:09.506251    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:09.516551    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:48:09.516566    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:09.516572    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:09.541573    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:48:09.541586    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:09.553411    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:09.553424    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:09.558262    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:48:09.558271    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:48:09.569908    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:48:09.569922    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:48:09.583989    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:48:09.583999    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:48:09.598068    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:48:09.598079    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:48:09.613321    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:48:09.613332    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:48:09.625330    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:48:09.625340    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:48:09.643359    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:48:09.643369    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:48:09.654749    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:09.654760    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:09.689169    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:09.689181    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:09.723751    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:48:09.723762    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:48:12.240271    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:17.242604    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:17.242824    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:17.258565    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:48:17.258655    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:17.271060    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:48:17.271132    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:17.281972    4389 logs.go:276] 2 containers: [4b491e173233 af28ca5a05f8]
	I0729 16:48:17.282051    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:17.293593    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:48:17.293659    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:17.305321    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:48:17.305390    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:17.315751    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:48:17.315816    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:17.325965    4389 logs.go:276] 0 containers: []
	W0729 16:48:17.325975    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:17.326029    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:17.336699    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:48:17.336713    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:48:17.336718    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:48:17.348183    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:48:17.348194    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:48:17.366871    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:48:17.366881    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:48:17.379230    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:48:17.379241    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:48:17.396831    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:48:17.396844    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:48:17.408115    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:17.408126    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:17.412480    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:48:17.412486    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:48:17.426186    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:48:17.426196    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:48:17.440660    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:48:17.440671    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:48:17.452057    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:17.452067    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:17.476441    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:48:17.476449    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:17.488053    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:17.488067    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:17.520917    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:17.520924    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
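Each gathering cycle above follows the same two-step pattern: discover container IDs with a docker name filter (the `docker ps -a --filter=name=k8s_<component>` lines), then tail the last 400 lines of each matching container's log. A sketch of that pattern, assuming a local docker CLI rather than minikube's ssh_runner (the helper names here are hypothetical):

```go
// gatherlogs.go: a sketch of the discover-then-tail pattern in the cycle
// above, run against a local docker daemon instead of over SSH.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	} {
		ids, err := containerIDs(component)
		if err != nil || len(ids) == 0 {
			// the log prints a warning for this case, e.g. kindnet above
			fmt.Printf("no container was found matching %q\n", component)
			continue
		}
		for _, id := range ids {
			// mirrors: docker logs --tail 400 <id>
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", component, id, logs)
		}
	}
}
```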
	I0729 16:48:20.058707    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:25.060882    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:25.061089    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:25.083878    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:48:25.083966    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:25.096128    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:48:25.096195    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:25.107460    4389 logs.go:276] 2 containers: [4b491e173233 af28ca5a05f8]
	I0729 16:48:25.107537    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:25.118089    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:48:25.118152    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:25.132427    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:48:25.132503    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:25.142855    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:48:25.142923    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:25.153429    4389 logs.go:276] 0 containers: []
	W0729 16:48:25.153441    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:25.153502    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:25.165014    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:48:25.165027    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:25.165032    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:25.199825    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:25.199833    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:25.238928    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:48:25.238940    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:48:25.250980    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:48:25.250992    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:48:25.262710    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:25.262722    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:25.287477    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:48:25.287485    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:25.298765    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:25.298780    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:25.303239    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:48:25.303246    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:48:25.317200    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:48:25.317213    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:48:25.330909    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:48:25.330922    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:48:25.342326    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:48:25.342339    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:48:25.356648    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:48:25.356663    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:48:25.373806    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:48:25.373820    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:48:27.887225    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:32.888894    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:32.889033    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:32.900078    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:48:32.900155    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:32.910469    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:48:32.910534    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:32.921134    4389 logs.go:276] 2 containers: [4b491e173233 af28ca5a05f8]
	I0729 16:48:32.921207    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:32.931500    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:48:32.931568    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:32.949740    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:48:32.949814    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:32.960521    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:48:32.960590    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:32.971457    4389 logs.go:276] 0 containers: []
	W0729 16:48:32.971467    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:32.971523    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:32.981868    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:48:32.981884    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:32.981890    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:32.986923    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:32.986930    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:33.021808    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:48:33.021830    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:48:33.036519    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:48:33.036530    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:48:33.052567    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:48:33.052578    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:48:33.063901    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:33.063910    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:33.088367    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:33.088375    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:33.123378    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:48:33.123386    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:48:33.142572    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:48:33.142584    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:48:33.157113    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:48:33.157124    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:48:33.169044    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:48:33.169055    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:48:33.186870    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:48:33.186888    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:48:33.198712    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:48:33.198723    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:35.711983    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:40.714120    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:40.714310    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:40.730318    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:48:40.730413    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:40.747526    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:48:40.747601    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:40.757987    4389 logs.go:276] 2 containers: [4b491e173233 af28ca5a05f8]
	I0729 16:48:40.758060    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:40.768486    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:48:40.768558    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:40.779066    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:48:40.779140    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:40.789625    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:48:40.789699    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:40.801825    4389 logs.go:276] 0 containers: []
	W0729 16:48:40.801836    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:40.801914    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:40.812281    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:48:40.812295    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:48:40.812300    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:48:40.826089    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:48:40.826098    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:48:40.837521    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:48:40.837536    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:48:40.853015    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:48:40.853026    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:48:40.870303    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:48:40.870312    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:40.885816    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:40.885827    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:40.923220    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:48:40.923229    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:48:40.937814    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:48:40.937824    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:48:40.949838    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:48:40.949852    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:48:40.964720    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:48:40.964729    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:48:40.976184    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:40.976195    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:41.001516    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:41.001535    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:41.036245    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:41.036257    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
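The host-side logs in each cycle (kubelet, Docker/cri-docker, dmesg, container status) are collected through /bin/bash -c pipelines; note the fallback from crictl to docker in the container-status command. A sketch that runs those same command strings locally, assuming passwordless sudo (the runShell wrapper is hypothetical; the command strings are copied from the log):

```go
// hostlogs.go: a sketch of the host-log commands the cycle runs via
// /bin/bash -c, executed locally here instead of over SSH.
package main

import (
	"fmt"
	"os/exec"
)

func runShell(cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("%s: %v\n", cmd, err)
	}
	fmt.Printf("%s", out)
}

func main() {
	// kubelet and container-runtime unit logs, capped at 400 lines each
	runShell("sudo journalctl -u kubelet -n 400")
	runShell("sudo journalctl -u docker -u cri-docker -n 400")
	// kernel warnings and errors only
	runShell("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	// container status: prefer crictl, fall back to docker if it is missing
	runShell("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
```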
	I0729 16:48:43.543431    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:48.543911    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:48.544161    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:48.564992    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:48:48.565091    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:48.582434    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:48:48.582502    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:48.594511    4389 logs.go:276] 2 containers: [4b491e173233 af28ca5a05f8]
	I0729 16:48:48.594590    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:48.605949    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:48:48.606012    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:48.617818    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:48:48.617882    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:48.629819    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:48:48.629882    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:48.640592    4389 logs.go:276] 0 containers: []
	W0729 16:48:48.640603    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:48.640654    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:48.651026    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:48:48.651042    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:48:48.651047    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:48:48.665457    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:48:48.665467    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:48:48.679066    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:48:48.679076    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:48:48.690557    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:48:48.690570    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:48:48.702563    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:48:48.702578    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:48:48.724547    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:48:48.724559    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:48:48.737686    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:48.737696    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:48.742387    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:48.742394    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:48.777871    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:48.777882    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:48.801827    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:48:48.801839    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:48:48.816287    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:48:48.816297    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:48.828265    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:48.828275    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:48.862628    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:48:48.862645    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:48:51.376858    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:56.379067    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:56.379229    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:56.393648    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:48:56.393729    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:56.404527    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:48:56.404601    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:56.414948    4389 logs.go:276] 2 containers: [4b491e173233 af28ca5a05f8]
	I0729 16:48:56.415020    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:56.425322    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:48:56.425384    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:56.435616    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:48:56.435693    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:56.445994    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:48:56.446078    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:56.456916    4389 logs.go:276] 0 containers: []
	W0729 16:48:56.456927    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:56.456981    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:56.467505    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:48:56.467519    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:56.467524    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:56.502742    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:48:56.502755    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:48:56.514185    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:48:56.514199    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:48:56.528941    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:48:56.528951    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:48:56.540359    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:48:56.540372    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:48:56.557435    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:48:56.557446    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:56.568958    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:56.568970    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:56.604898    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:56.604910    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:56.609716    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:48:56.609724    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:48:56.624200    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:48:56.624212    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:48:56.638326    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:48:56.638338    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:48:56.650281    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:48:56.650295    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:48:56.662550    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:56.662561    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:59.187637    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:04.189908    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:04.190188    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:04.213225    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:49:04.213350    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:04.229599    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:49:04.229704    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:04.242540    4389 logs.go:276] 2 containers: [4b491e173233 af28ca5a05f8]
	I0729 16:49:04.242615    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:04.255530    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:49:04.255603    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:04.265782    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:49:04.265848    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:04.276273    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:49:04.276348    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:04.288198    4389 logs.go:276] 0 containers: []
	W0729 16:49:04.288212    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:04.288272    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:04.298482    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:49:04.298495    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:49:04.298501    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:49:04.310562    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:49:04.310573    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:49:04.322751    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:49:04.322761    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:49:04.341766    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:49:04.341777    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:49:04.360023    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:04.360034    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:04.395781    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:04.395790    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:04.399973    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:49:04.399983    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:49:04.414465    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:49:04.414476    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:49:04.428316    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:04.428325    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:04.453878    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:49:04.453886    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:04.465567    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:04.465577    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:04.499212    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:49:04.499223    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:49:04.513867    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:49:04.513876    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:49:07.027768    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:12.030248    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:12.030435    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:12.050963    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:49:12.051080    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:12.066252    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:49:12.066322    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:12.078830    4389 logs.go:276] 2 containers: [4b491e173233 af28ca5a05f8]
	I0729 16:49:12.078906    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:12.089592    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:49:12.089664    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:12.100863    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:49:12.100931    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:12.111798    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:49:12.111891    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:12.130122    4389 logs.go:276] 0 containers: []
	W0729 16:49:12.130134    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:12.130197    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:12.140701    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:49:12.140716    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:12.140722    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:12.174403    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:49:12.174412    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:49:12.190999    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:49:12.191010    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:49:12.211376    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:49:12.211386    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:49:12.228685    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:49:12.228696    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:49:12.240332    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:49:12.240341    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:49:12.255262    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:49:12.255275    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:49:12.266904    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:12.266913    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:12.291879    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:12.291889    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:12.296725    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:12.296735    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:12.333076    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:49:12.333087    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:49:12.348523    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:49:12.348533    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:49:12.362197    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:49:12.362208    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:14.875512    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:19.877615    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:19.877753    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:19.889359    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:49:19.889438    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:19.899784    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:49:19.899892    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:19.917899    4389 logs.go:276] 2 containers: [4b491e173233 af28ca5a05f8]
	I0729 16:49:19.917962    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:19.928262    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:49:19.928321    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:19.939138    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:49:19.939213    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:19.949274    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:49:19.949338    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:19.959450    4389 logs.go:276] 0 containers: []
	W0729 16:49:19.959458    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:19.959518    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:19.970074    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:49:19.970086    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:49:19.970091    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:49:19.981336    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:19.981345    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:20.004182    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:20.004194    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:20.008930    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:20.008939    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:20.042997    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:49:20.043011    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:49:20.057136    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:49:20.057149    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:49:20.068986    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:49:20.069000    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:49:20.080360    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:49:20.080369    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:49:20.097814    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:49:20.097824    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:20.109410    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:20.109419    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:20.144759    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:49:20.144771    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:49:20.159432    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:49:20.159441    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:49:20.174251    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:49:20.174264    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:49:22.687705    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:27.689784    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:27.690145    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:27.714140    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:49:27.714234    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:27.730598    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:49:27.730678    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:27.743810    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:49:27.743880    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:27.758837    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:49:27.758910    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:27.770394    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:49:27.770453    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:27.780835    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:49:27.780900    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:27.790865    4389 logs.go:276] 0 containers: []
	W0729 16:49:27.790877    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:27.790936    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:27.801580    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:49:27.801596    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:49:27.801602    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:27.815508    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:49:27.815521    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:49:27.828523    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:49:27.828534    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:49:27.845944    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:49:27.845953    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:49:27.861866    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:49:27.861876    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:49:27.876508    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:49:27.876518    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:49:27.888114    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:27.888123    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:27.913433    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:49:27.913440    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:49:27.924559    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:49:27.924570    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:49:27.938587    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:27.938597    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:27.974546    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:49:27.974556    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:49:27.993919    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:49:27.993929    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:49:28.005643    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:49:28.005653    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:49:28.017320    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:28.017330    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:28.050905    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:28.050914    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:30.557938    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:35.560091    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:35.560360    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:35.583704    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:49:35.583820    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:35.599828    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:49:35.599902    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:35.614976    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:49:35.615058    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:35.626455    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:49:35.626530    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:35.636792    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:49:35.636864    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:35.647297    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:49:35.647364    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:35.657369    4389 logs.go:276] 0 containers: []
	W0729 16:49:35.657380    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:35.657438    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:35.666998    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:49:35.667013    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:49:35.667019    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:49:35.678477    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:49:35.678487    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:49:35.695904    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:49:35.695916    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:49:35.707833    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:49:35.707844    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:49:35.719012    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:49:35.719024    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:49:35.732698    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:49:35.732707    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:49:35.744542    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:35.744555    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:35.768856    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:49:35.768866    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:49:35.783035    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:35.783045    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:35.788171    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:35.788178    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:35.828622    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:49:35.828635    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:49:35.842145    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:49:35.842156    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:49:35.855211    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:49:35.855223    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:49:35.869696    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:49:35.869706    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:35.881625    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:35.881634    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:38.417433    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:43.419540    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:43.419681    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:43.431380    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:49:43.431453    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:43.442273    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:49:43.442346    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:43.452755    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:49:43.452827    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:43.463333    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:49:43.463413    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:43.473595    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:49:43.473664    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:43.483672    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:49:43.483742    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:43.493898    4389 logs.go:276] 0 containers: []
	W0729 16:49:43.493909    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:43.493969    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:43.504179    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:49:43.504197    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:49:43.504202    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:49:43.516293    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:43.516303    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:43.541380    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:43.541388    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:43.577150    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:49:43.577160    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:49:43.593325    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:49:43.593334    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:49:43.607423    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:43.607439    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:43.612028    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:49:43.612035    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:49:43.626088    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:49:43.626099    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:49:43.637856    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:49:43.637867    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:49:43.649713    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:49:43.649724    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:49:43.661278    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:49:43.661289    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:49:43.676615    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:49:43.676626    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:49:43.700257    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:49:43.700268    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:43.712447    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:43.712461    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:43.746029    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:49:43.746042    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:49:46.263064    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:51.265272    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:51.265430    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:51.279185    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:49:51.279266    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:51.295061    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:49:51.295131    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:51.305897    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:49:51.305972    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:51.319998    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:49:51.320073    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:51.331209    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:49:51.331275    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:51.341859    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:49:51.341927    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:51.353140    4389 logs.go:276] 0 containers: []
	W0729 16:49:51.353152    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:51.353210    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:51.364220    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:49:51.364238    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:51.364243    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:51.399400    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:49:51.399411    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:49:51.413488    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:49:51.413498    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:49:51.428632    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:49:51.428644    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:49:51.440827    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:49:51.440838    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:49:51.452179    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:49:51.452191    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:49:51.464045    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:49:51.464056    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:49:51.475580    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:49:51.475594    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:49:51.491560    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:51.491570    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:51.524711    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:51.524718    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:51.528834    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:49:51.528843    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:49:51.540506    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:49:51.540517    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:49:51.558144    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:49:51.558154    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:49:51.570402    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:51.570412    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:51.595839    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:49:51.595846    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:54.109770    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:59.112043    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:59.112234    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:59.129740    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:49:59.129830    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:59.146396    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:49:59.146464    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:59.157057    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:49:59.157127    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:59.168657    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:49:59.168733    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:59.179248    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:49:59.179316    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:59.189514    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:49:59.189581    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:59.200845    4389 logs.go:276] 0 containers: []
	W0729 16:49:59.200857    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:59.200918    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:59.211005    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:49:59.211024    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:59.211029    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:59.215645    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:49:59.215654    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:49:59.229549    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:49:59.229562    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:49:59.241330    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:49:59.241340    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:59.253380    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:49:59.253391    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:49:59.272252    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:59.272263    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:59.308733    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:59.308752    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:59.343404    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:49:59.343415    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:49:59.358258    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:49:59.358268    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:49:59.378882    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:49:59.378892    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:49:59.391074    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:59.391088    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:59.415957    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:49:59.415964    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:49:59.427406    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:49:59.427419    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:49:59.438932    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:49:59.438944    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:49:59.452795    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:49:59.452805    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:50:01.966221    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:06.966583    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:06.966746    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:50:06.984093    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:50:06.984187    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:50:06.998600    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:50:06.998667    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:50:07.009633    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:50:07.009705    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:50:07.019874    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:50:07.019940    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:50:07.033136    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:50:07.033211    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:50:07.044397    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:50:07.044468    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:50:07.055280    4389 logs.go:276] 0 containers: []
	W0729 16:50:07.055291    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:50:07.055349    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:50:07.065577    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:50:07.065595    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:50:07.065600    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:50:07.077373    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:50:07.077386    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:50:07.089836    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:50:07.089850    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:50:07.108973    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:50:07.108985    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:50:07.150860    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:50:07.150872    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:50:07.165237    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:50:07.165247    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:50:07.181013    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:50:07.181026    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:50:07.205295    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:50:07.205303    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:50:07.216948    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:50:07.216959    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:50:07.238788    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:50:07.238802    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:50:07.250917    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:50:07.250931    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:50:07.285894    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:50:07.285903    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:50:07.290659    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:50:07.290665    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:50:07.302713    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:50:07.302723    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:50:07.317116    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:50:07.317126    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:50:09.835636    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:14.837821    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:14.837987    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:50:14.849920    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:50:14.849997    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:50:14.860625    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:50:14.860690    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:50:14.871398    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:50:14.871471    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:50:14.881694    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:50:14.881764    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:50:14.892534    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:50:14.892596    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:50:14.903307    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:50:14.903379    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:50:14.912944    4389 logs.go:276] 0 containers: []
	W0729 16:50:14.912955    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:50:14.913013    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:50:14.923773    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:50:14.923790    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:50:14.923795    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:50:14.937795    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:50:14.937807    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:50:14.949336    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:50:14.949350    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:50:14.984604    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:50:14.984615    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:50:14.996510    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:50:14.996523    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:50:15.019951    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:50:15.019957    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:50:15.055586    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:50:15.055604    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:50:15.067590    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:50:15.067601    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:50:15.078758    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:50:15.078771    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:50:15.094042    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:50:15.094053    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:50:15.111428    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:50:15.111439    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:50:15.124897    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:50:15.124911    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:50:15.129640    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:50:15.129646    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:50:15.143653    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:50:15.143664    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:50:15.155622    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:50:15.155635    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:50:17.668959    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:22.671301    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:22.671557    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:50:22.697766    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:50:22.697861    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:50:22.715913    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:50:22.715991    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:50:22.729372    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:50:22.729434    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:50:22.740682    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:50:22.740747    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:50:22.751919    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:50:22.751987    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:50:22.762868    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:50:22.762928    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:50:22.772955    4389 logs.go:276] 0 containers: []
	W0729 16:50:22.772968    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:50:22.773020    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:50:22.783719    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:50:22.783736    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:50:22.783740    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:50:22.795313    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:50:22.795329    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:50:22.806846    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:50:22.806859    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:50:22.821727    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:50:22.821740    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:50:22.839489    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:50:22.839497    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:50:22.853940    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:50:22.853956    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:50:22.866706    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:50:22.866717    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:50:22.879691    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:50:22.879705    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:50:22.896965    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:50:22.896974    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:50:22.908967    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:50:22.908981    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:50:22.934043    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:50:22.934049    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:50:22.969539    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:50:22.969545    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:50:22.974310    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:50:22.974317    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:50:23.008238    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:50:23.008251    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:50:23.023006    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:50:23.023020    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:50:25.538541    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:30.540153    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:30.540322    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:50:30.555662    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:50:30.555751    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:50:30.567645    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:50:30.567716    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:50:30.581176    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:50:30.581257    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:50:30.593667    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:50:30.593738    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:50:30.605413    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:50:30.605483    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:50:30.619775    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:50:30.619846    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:50:30.630120    4389 logs.go:276] 0 containers: []
	W0729 16:50:30.630132    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:50:30.630194    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:50:30.640620    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:50:30.640638    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:50:30.640645    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:50:30.676675    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:50:30.676686    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:50:30.690948    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:50:30.690960    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:50:30.702659    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:50:30.702671    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:50:30.718335    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:50:30.718345    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:50:30.732102    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:50:30.732114    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:50:30.767319    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:50:30.767328    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:50:30.771826    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:50:30.771832    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:50:30.783428    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:50:30.783439    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:50:30.795212    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:50:30.795223    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:50:30.806217    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:50:30.806229    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:50:30.817786    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:50:30.817798    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:50:30.843182    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:50:30.843192    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:50:30.857795    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:50:30.857806    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:50:30.875734    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:50:30.875745    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:50:33.389770    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:38.391215    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:38.391712    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:50:38.433894    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:50:38.434040    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:50:38.455220    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:50:38.455327    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:50:38.471032    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:50:38.471116    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:50:38.483289    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:50:38.483360    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:50:38.494390    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:50:38.494466    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:50:38.505251    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:50:38.505326    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:50:38.516390    4389 logs.go:276] 0 containers: []
	W0729 16:50:38.516402    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:50:38.516459    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:50:38.527432    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:50:38.527452    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:50:38.527457    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:50:38.539465    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:50:38.539478    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:50:38.551055    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:50:38.551067    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:50:38.566205    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:50:38.566218    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:50:38.581451    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:50:38.581463    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:50:38.596358    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:50:38.596369    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:50:38.615606    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:50:38.615616    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:50:38.627516    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:50:38.627531    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:50:38.640215    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:50:38.640227    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:50:38.653163    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:50:38.653177    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:50:38.678226    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:50:38.678241    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:50:38.714536    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:50:38.714551    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:50:38.719206    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:50:38.719219    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:50:38.757424    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:50:38.757437    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:50:38.770123    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:50:38.770135    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:50:41.289671    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:46.290838    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:46.291016    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:50:46.313719    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:50:46.313838    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:50:46.329157    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:50:46.329231    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:50:46.342125    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:50:46.342209    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:50:46.364383    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:50:46.364460    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:50:46.390478    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:50:46.390555    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:50:46.401062    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:50:46.401135    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:50:46.411014    4389 logs.go:276] 0 containers: []
	W0729 16:50:46.411024    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:50:46.411084    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:50:46.422005    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:50:46.422024    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:50:46.422029    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:50:46.438887    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:50:46.438899    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:50:46.454501    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:50:46.454511    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:50:46.479578    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:50:46.479588    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:50:46.491674    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:50:46.491686    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:50:46.527625    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:50:46.527638    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:50:46.541485    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:50:46.541496    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:50:46.555446    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:50:46.555456    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:50:46.566514    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:50:46.566527    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:50:46.581593    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:50:46.581608    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:50:46.586131    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:50:46.586139    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:50:46.622718    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:50:46.622732    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:50:46.634377    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:50:46.634387    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:50:46.646406    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:50:46.646416    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:50:46.657962    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:50:46.657972    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:50:49.181962    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:54.183509    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:54.183643    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:50:54.196697    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:50:54.196774    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:50:54.209149    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:50:54.209221    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:50:54.222411    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:50:54.222488    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:50:54.235370    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:50:54.235446    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:50:54.248646    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:50:54.248723    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:50:54.259995    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:50:54.260086    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:50:54.272520    4389 logs.go:276] 0 containers: []
	W0729 16:50:54.272534    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:50:54.272606    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:50:54.285074    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:50:54.285094    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:50:54.285100    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:50:54.298779    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:50:54.298791    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:50:54.334696    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:50:54.334709    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:50:54.371309    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:50:54.371321    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:50:54.383055    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:50:54.383066    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:50:54.394850    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:50:54.394865    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:50:54.406588    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:50:54.406602    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:50:54.424416    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:50:54.424427    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:50:54.447935    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:50:54.447943    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:50:54.459725    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:50:54.459735    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:50:54.478326    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:50:54.478337    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:50:54.490188    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:50:54.490203    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:50:54.494801    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:50:54.494809    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:50:54.510184    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:50:54.510197    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:50:54.524960    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:50:54.524974    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:50:57.039725    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:51:02.041324    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:51:02.041539    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:51:02.067308    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:51:02.067429    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:51:02.082301    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:51:02.082387    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:51:02.095124    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:51:02.095199    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:51:02.107727    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:51:02.107795    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:51:02.118458    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:51:02.118529    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:51:02.129096    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:51:02.129171    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:51:02.139986    4389 logs.go:276] 0 containers: []
	W0729 16:51:02.139999    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:51:02.140058    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:51:02.150302    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:51:02.150319    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:51:02.150324    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:51:02.186131    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:51:02.186143    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:51:02.200886    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:51:02.200895    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:51:02.212525    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:51:02.212539    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:51:02.223677    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:51:02.223687    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:51:02.248392    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:51:02.248401    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:51:02.282832    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:51:02.282845    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:51:02.298651    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:51:02.298667    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:51:02.310740    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:51:02.310752    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:51:02.324930    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:51:02.324942    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:51:02.329836    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:51:02.329845    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:51:02.346672    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:51:02.346683    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:51:02.357987    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:51:02.357998    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:51:02.369621    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:51:02.369634    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:51:02.381373    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:51:02.381383    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:51:04.895460    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:51:09.897674    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:51:09.902066    4389 out.go:177] 
	W0729 16:51:09.904973    4389 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 16:51:09.904980    4389 out.go:239] * 
	W0729 16:51:09.905556    4389 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:51:09.915965    4389 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-980000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-29 16:51:10.011207 -0700 PDT m=+2900.125787585
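The stderr log above traces a single failure shape: minikube probes https://10.0.2.15:8443/healthz, each probe gives up after a 5-second client timeout (api_server.go:269), component logs are re-gathered, and the probe is retried until the 6m0s node deadline named in the GUEST_START message expires. What follows is a minimal Go sketch of that polling pattern only; the roughly 2.5s retry pause and the 6m overall deadline are inferred from the timestamps above, and this is not minikube's actual api_server.go implementation.

	// Illustrative only: a sketch of the polling pattern visible above, not
	// minikube's actual code. Endpoint, 5s per-probe timeout, ~2.5s retry
	// pause, and 6m overall deadline are read off the log.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, overall time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // each probe times out after 5s, as in the log
			Transport: &http.Transport{
				// the guest apiserver serves a self-signed certificate
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz reported healthy
				}
			}
			time.Sleep(2500 * time.Millisecond) // pause before the next probe
		}
		return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
	}

	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
			fmt.Println("X Exiting due to GUEST_START:", err)
		}
	}
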
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-980000 -n running-upgrade-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-980000 -n running-upgrade-980000: exit status 2 (15.591793625s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
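The probe above prints `Running` yet exits 2, which the harness flags as "(may be ok)". Per `minikube status --help` (treat this mapping as an assumption if the help text has changed in your version), the exit code bit-encodes component health, with 1 = host NOK, 2 = cluster NOK, 4 = Kubernetes NOK; exit status 2 then reads as "VM up, but the cluster inside it unhealthy", consistent with the apiserver never going healthy above. A hypothetical Go sketch of decoding that status, reusing the profile name from this run:

	// Hypothetical post-mortem helper: rerun the status probe and decode the
	// bit-encoded exit status (bit mapping assumed from `minikube status
	// --help`; verify against your minikube version before relying on it).
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "running-upgrade-980000",
			"-n", "running-upgrade-980000")
		out, err := cmd.Output()
		code := -1
		if cmd.ProcessState != nil { // nil if the binary could not be started at all
			code = cmd.ProcessState.ExitCode()
		}
		fmt.Printf("stdout: %q exit: %d err: %v\n", out, code, err)
		if code > 0 {
			fmt.Printf("host NOK: %v, cluster NOK: %v, kubernetes NOK: %v\n",
				code&1 != 0, code&2 != 0, code&4 != 0)
		}
	}
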
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-980000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-890000          | force-systemd-flag-890000 | jenkins | v1.33.1 | 29 Jul 24 16:41 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-887000              | force-systemd-env-887000  | jenkins | v1.33.1 | 29 Jul 24 16:41 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-887000           | force-systemd-env-887000  | jenkins | v1.33.1 | 29 Jul 24 16:41 PDT | 29 Jul 24 16:41 PDT |
	| start   | -p docker-flags-935000                | docker-flags-935000       | jenkins | v1.33.1 | 29 Jul 24 16:41 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-890000             | force-systemd-flag-890000 | jenkins | v1.33.1 | 29 Jul 24 16:41 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-890000          | force-systemd-flag-890000 | jenkins | v1.33.1 | 29 Jul 24 16:41 PDT | 29 Jul 24 16:41 PDT |
	| start   | -p cert-expiration-792000             | cert-expiration-792000    | jenkins | v1.33.1 | 29 Jul 24 16:41 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-935000 ssh               | docker-flags-935000       | jenkins | v1.33.1 | 29 Jul 24 16:41 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-935000 ssh               | docker-flags-935000       | jenkins | v1.33.1 | 29 Jul 24 16:41 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-935000                | docker-flags-935000       | jenkins | v1.33.1 | 29 Jul 24 16:41 PDT | 29 Jul 24 16:41 PDT |
	| start   | -p cert-options-940000                | cert-options-940000       | jenkins | v1.33.1 | 29 Jul 24 16:41 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-940000 ssh               | cert-options-940000       | jenkins | v1.33.1 | 29 Jul 24 16:41 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-940000 -- sudo        | cert-options-940000       | jenkins | v1.33.1 | 29 Jul 24 16:41 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-940000                | cert-options-940000       | jenkins | v1.33.1 | 29 Jul 24 16:41 PDT | 29 Jul 24 16:41 PDT |
	| start   | -p running-upgrade-980000             | minikube                  | jenkins | v1.26.0 | 29 Jul 24 16:41 PDT | 29 Jul 24 16:42 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-980000             | running-upgrade-980000    | jenkins | v1.33.1 | 29 Jul 24 16:42 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-792000             | cert-expiration-792000    | jenkins | v1.33.1 | 29 Jul 24 16:44 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-792000             | cert-expiration-792000    | jenkins | v1.33.1 | 29 Jul 24 16:44 PDT | 29 Jul 24 16:44 PDT |
	| start   | -p kubernetes-upgrade-569000          | kubernetes-upgrade-569000 | jenkins | v1.33.1 | 29 Jul 24 16:44 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-569000          | kubernetes-upgrade-569000 | jenkins | v1.33.1 | 29 Jul 24 16:45 PDT | 29 Jul 24 16:45 PDT |
	| start   | -p kubernetes-upgrade-569000          | kubernetes-upgrade-569000 | jenkins | v1.33.1 | 29 Jul 24 16:45 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-569000          | kubernetes-upgrade-569000 | jenkins | v1.33.1 | 29 Jul 24 16:45 PDT | 29 Jul 24 16:45 PDT |
	| start   | -p stopped-upgrade-480000             | minikube                  | jenkins | v1.26.0 | 29 Jul 24 16:45 PDT | 29 Jul 24 16:45 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-480000 stop           | minikube                  | jenkins | v1.26.0 | 29 Jul 24 16:45 PDT | 29 Jul 24 16:46 PDT |
	| start   | -p stopped-upgrade-480000             | stopped-upgrade-480000    | jenkins | v1.33.1 | 29 Jul 24 16:46 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 16:46:00
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 16:46:00.801385    4568 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:46:00.801552    4568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:00.801558    4568 out.go:304] Setting ErrFile to fd 2...
	I0729 16:46:00.801561    4568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:00.801724    4568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:46:00.803042    4568 out.go:298] Setting JSON to false
	I0729 16:46:00.823037    4568 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2723,"bootTime":1722294037,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:46:00.823119    4568 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:46:00.827104    4568 out.go:177] * [stopped-upgrade-480000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:46:00.835012    4568 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:46:00.835073    4568 notify.go:220] Checking for updates...
	I0729 16:46:00.841958    4568 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:46:00.845009    4568 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:46:00.848019    4568 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:46:00.850992    4568 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:46:00.853963    4568 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:46:00.857308    4568 config.go:182] Loaded profile config "stopped-upgrade-480000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:46:00.859919    4568 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 16:46:00.862989    4568 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:46:00.866971    4568 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:46:00.874018    4568 start.go:297] selected driver: qemu2
	I0729 16:46:00.874026    4568 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-480000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50508 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-480000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 16:46:00.874100    4568 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:46:00.876740    4568 cni.go:84] Creating CNI manager for ""
	I0729 16:46:00.876755    4568 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:46:00.876781    4568 start.go:340] cluster config:
	{Name:stopped-upgrade-480000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50508 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-480000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 16:46:00.876832    4568 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:46:00.882931    4568 out.go:177] * Starting "stopped-upgrade-480000" primary control-plane node in "stopped-upgrade-480000" cluster
	I0729 16:46:00.887027    4568 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 16:46:00.887042    4568 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 16:46:00.887052    4568 cache.go:56] Caching tarball of preloaded images
	I0729 16:46:00.887108    4568 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:46:00.887113    4568 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 16:46:00.887169    4568 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/config.json ...
	I0729 16:46:00.887582    4568 start.go:360] acquireMachinesLock for stopped-upgrade-480000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:46:00.887614    4568 start.go:364] duration metric: took 26.583µs to acquireMachinesLock for "stopped-upgrade-480000"
	I0729 16:46:00.887623    4568 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:46:00.887628    4568 fix.go:54] fixHost starting: 
	I0729 16:46:00.887729    4568 fix.go:112] recreateIfNeeded on stopped-upgrade-480000: state=Stopped err=<nil>
	W0729 16:46:00.887736    4568 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:46:00.895954    4568 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-480000" ...
	I0729 16:46:02.168371    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:00.899820    4568 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:46:00.899880    4568 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50473-:22,hostfwd=tcp::50474-:2376,hostname=stopped-upgrade-480000 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/disk.qcow2
	I0729 16:46:00.944384    4568 main.go:141] libmachine: STDOUT: 
	I0729 16:46:00.944413    4568 main.go:141] libmachine: STDERR: 
	I0729 16:46:00.944419    4568 main.go:141] libmachine: Waiting for VM to start (ssh -p 50473 docker@127.0.0.1)...
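	Note: the qemu-system-aarch64 invocation logged above is a single long line; the same command is broken across continuation lines here for readability only (arguments, paths, and ports exactly as logged). Host ports 50473 and 50474 are user-mode NAT forwards to guest ports 22 (ssh) and 2376 (docker):
	
	  # re-wrapped copy of the logged command, not a new invocation
	  qemu-system-aarch64 -M virt,highmem=off -cpu host \
	    -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	    -display none -accel hvf -m 2200 -smp 2 -boot d \
	    -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/boot2docker.iso \
	    -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/monitor,server,nowait \
	    -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/qemu.pid \
	    -nic user,model=virtio,hostfwd=tcp::50473-:22,hostfwd=tcp::50474-:2376,hostname=stopped-upgrade-480000 \
	    -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/disk.qcow2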
	I0729 16:46:07.170936    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:46:07.171045    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:46:07.185359    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:46:07.185432    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:46:07.196695    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:46:07.196771    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:46:07.207199    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:46:07.207270    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:46:07.218238    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:46:07.218308    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:46:07.234337    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:46:07.234405    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:46:07.245002    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:46:07.245066    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:46:07.255502    4389 logs.go:276] 0 containers: []
	W0729 16:46:07.255513    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:46:07.255571    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:46:07.266556    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:46:07.266573    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:46:07.266579    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:46:07.291596    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:46:07.291602    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:46:07.305481    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:46:07.305492    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:46:07.324441    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:46:07.324451    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:46:07.336704    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:46:07.336714    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:46:07.351355    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:46:07.351365    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:46:07.363426    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:46:07.363435    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:46:07.376098    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:46:07.376110    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:46:07.387256    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:46:07.387267    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:46:07.399209    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:46:07.399221    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:46:07.410481    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:46:07.410496    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:46:07.414871    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:46:07.414877    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:46:07.451585    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:46:07.451597    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:46:07.470410    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:46:07.470421    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:46:07.482172    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:46:07.482183    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:46:07.522352    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:46:07.522362    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:46:10.045349    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:15.046146    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:46:15.046302    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:46:15.057379    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:46:15.057451    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:46:15.068527    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:46:15.068602    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:46:15.079101    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:46:15.079172    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:46:15.092445    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:46:15.092518    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:46:15.103718    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:46:15.103792    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:46:15.114038    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:46:15.114110    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:46:15.123897    4389 logs.go:276] 0 containers: []
	W0729 16:46:15.123907    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:46:15.123966    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:46:15.134203    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:46:15.134221    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:46:15.134228    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:46:15.138501    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:46:15.138507    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:46:15.150943    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:46:15.150952    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:46:15.165620    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:46:15.165642    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:46:15.177578    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:46:15.177590    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:46:15.193278    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:46:15.193288    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:46:15.204534    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:46:15.204548    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:46:15.234815    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:46:15.234826    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:46:15.272961    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:46:15.272972    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:46:15.291695    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:46:15.291706    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:46:15.307186    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:46:15.307197    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:46:15.318873    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:46:15.318882    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:46:15.359584    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:46:15.359592    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:46:15.373784    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:46:15.373794    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:46:15.385582    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:46:15.385592    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:46:15.407643    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:46:15.407652    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:46:17.933963    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:21.045928    4568 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/config.json ...
	I0729 16:46:21.046697    4568 machine.go:94] provisionDockerMachine start ...
	I0729 16:46:21.046857    4568 main.go:141] libmachine: Using SSH client type: native
	I0729 16:46:21.047319    4568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10074aa10] 0x10074d270 <nil>  [] 0s} localhost 50473 <nil> <nil>}
	I0729 16:46:21.047333    4568 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 16:46:21.125876    4568 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 16:46:21.125913    4568 buildroot.go:166] provisioning hostname "stopped-upgrade-480000"
	I0729 16:46:21.126034    4568 main.go:141] libmachine: Using SSH client type: native
	I0729 16:46:21.126269    4568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10074aa10] 0x10074d270 <nil>  [] 0s} localhost 50473 <nil> <nil>}
	I0729 16:46:21.126282    4568 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-480000 && echo "stopped-upgrade-480000" | sudo tee /etc/hostname
	I0729 16:46:21.196331    4568 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-480000
	
	I0729 16:46:21.196442    4568 main.go:141] libmachine: Using SSH client type: native
	I0729 16:46:21.196652    4568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10074aa10] 0x10074d270 <nil>  [] 0s} localhost 50473 <nil> <nil>}
	I0729 16:46:21.196666    4568 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-480000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-480000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-480000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 16:46:21.255165    4568 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 16:46:21.255178    4568 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19347-923/.minikube CaCertPath:/Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19347-923/.minikube}
	I0729 16:46:21.255187    4568 buildroot.go:174] setting up certificates
	I0729 16:46:21.255191    4568 provision.go:84] configureAuth start
	I0729 16:46:21.255199    4568 provision.go:143] copyHostCerts
	I0729 16:46:21.255270    4568 exec_runner.go:144] found /Users/jenkins/minikube-integration/19347-923/.minikube/key.pem, removing ...
	I0729 16:46:21.255276    4568 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19347-923/.minikube/key.pem
	I0729 16:46:21.255383    4568 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19347-923/.minikube/key.pem (1679 bytes)
	I0729 16:46:21.255559    4568 exec_runner.go:144] found /Users/jenkins/minikube-integration/19347-923/.minikube/ca.pem, removing ...
	I0729 16:46:21.255563    4568 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19347-923/.minikube/ca.pem
	I0729 16:46:21.255614    4568 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19347-923/.minikube/ca.pem (1082 bytes)
	I0729 16:46:21.255708    4568 exec_runner.go:144] found /Users/jenkins/minikube-integration/19347-923/.minikube/cert.pem, removing ...
	I0729 16:46:21.255711    4568 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19347-923/.minikube/cert.pem
	I0729 16:46:21.255759    4568 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19347-923/.minikube/cert.pem (1123 bytes)
	I0729 16:46:21.255844    4568 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19347-923/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-480000 san=[127.0.0.1 localhost minikube stopped-upgrade-480000]
	I0729 16:46:21.318570    4568 provision.go:177] copyRemoteCerts
	I0729 16:46:21.318606    4568 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 16:46:21.318613    4568 sshutil.go:53] new ssh client: &{IP:localhost Port:50473 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/id_rsa Username:docker}
	I0729 16:46:21.346264    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 16:46:21.352838    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 16:46:21.359350    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 16:46:21.366489    4568 provision.go:87] duration metric: took 111.29575ms to configureAuth
	I0729 16:46:21.366497    4568 buildroot.go:189] setting minikube options for container-runtime
	I0729 16:46:21.366598    4568 config.go:182] Loaded profile config "stopped-upgrade-480000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:46:21.366642    4568 main.go:141] libmachine: Using SSH client type: native
	I0729 16:46:21.366726    4568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10074aa10] 0x10074d270 <nil>  [] 0s} localhost 50473 <nil> <nil>}
	I0729 16:46:21.366730    4568 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 16:46:21.416423    4568 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 16:46:21.416433    4568 buildroot.go:70] root file system type: tmpfs
	I0729 16:46:21.416483    4568 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 16:46:21.416526    4568 main.go:141] libmachine: Using SSH client type: native
	I0729 16:46:21.416628    4568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10074aa10] 0x10074d270 <nil>  [] 0s} localhost 50473 <nil> <nil>}
	I0729 16:46:21.416662    4568 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 16:46:21.471806    4568 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
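	Note: the literal %!s(MISSING) in the printf command above (and %!N(MISSING) / %!p(MISSING) later in this log) is Go's fmt marker for a format verb with too few operands; the command template was evidently passed through a Printf-style logging call without the operand, while the command actually run on the guest carried the full unit text echoed back above. A minimal Go reproduction of the marker:
	
	  package main
	
	  import "fmt"
	
	  func main() {
	  	// A verb with no matching operand renders as %!verb(MISSING).
	  	fmt.Printf("printf %s\n") // prints: printf %!s(MISSING)
	  }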
	I0729 16:46:21.471850    4568 main.go:141] libmachine: Using SSH client type: native
	I0729 16:46:21.471954    4568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10074aa10] 0x10074d270 <nil>  [] 0s} localhost 50473 <nil> <nil>}
	I0729 16:46:21.471964    4568 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 16:46:21.821203    4568 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
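	Note: the diff || { mv; reload; enable; restart; } command above is a change-detection idiom: diff exits non-zero when docker.service differs from (or, as here, does not yet exist beside) docker.service.new, and only then is the new unit moved into place and docker reloaded, enabled, and restarted. Re-wrapped for readability:
	
	  sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
	    || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
	         sudo systemctl -f daemon-reload \
	         && sudo systemctl -f enable docker \
	         && sudo systemctl -f restart docker; }
	
	The "diff: can't stat" output is therefore expected on a freshly restored guest: no prior unit existed, so the new unit was installed, and the "Created symlink" line confirms the enable step.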
	
	I0729 16:46:21.821217    4568 machine.go:97] duration metric: took 774.520583ms to provisionDockerMachine
	I0729 16:46:21.821231    4568 start.go:293] postStartSetup for "stopped-upgrade-480000" (driver="qemu2")
	I0729 16:46:21.821238    4568 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 16:46:21.821308    4568 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 16:46:21.821319    4568 sshutil.go:53] new ssh client: &{IP:localhost Port:50473 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/id_rsa Username:docker}
	I0729 16:46:21.849137    4568 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 16:46:21.850597    4568 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 16:46:21.850604    4568 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19347-923/.minikube/addons for local assets ...
	I0729 16:46:21.850693    4568 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19347-923/.minikube/files for local assets ...
	I0729 16:46:21.850807    4568 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19347-923/.minikube/files/etc/ssl/certs/13902.pem -> 13902.pem in /etc/ssl/certs
	I0729 16:46:21.850933    4568 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 16:46:21.853842    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/files/etc/ssl/certs/13902.pem --> /etc/ssl/certs/13902.pem (1708 bytes)
	I0729 16:46:21.860475    4568 start.go:296] duration metric: took 39.23925ms for postStartSetup
	I0729 16:46:21.860491    4568 fix.go:56] duration metric: took 20.973163542s for fixHost
	I0729 16:46:21.860525    4568 main.go:141] libmachine: Using SSH client type: native
	I0729 16:46:21.860634    4568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10074aa10] 0x10074d270 <nil>  [] 0s} localhost 50473 <nil> <nil>}
	I0729 16:46:21.860638    4568 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 16:46:21.911705    4568 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722296782.197289837
	
	I0729 16:46:21.911712    4568 fix.go:216] guest clock: 1722296782.197289837
	I0729 16:46:21.911716    4568 fix.go:229] Guest: 2024-07-29 16:46:22.197289837 -0700 PDT Remote: 2024-07-29 16:46:21.860493 -0700 PDT m=+21.091105501 (delta=336.796837ms)
	I0729 16:46:21.911727    4568 fix.go:200] guest clock delta is within tolerance: 336.796837ms
	I0729 16:46:21.911729    4568 start.go:83] releasing machines lock for "stopped-upgrade-480000", held for 21.024412208s
	I0729 16:46:21.911784    4568 ssh_runner.go:195] Run: cat /version.json
	I0729 16:46:21.911795    4568 sshutil.go:53] new ssh client: &{IP:localhost Port:50473 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/id_rsa Username:docker}
	I0729 16:46:21.911784    4568 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 16:46:21.911836    4568 sshutil.go:53] new ssh client: &{IP:localhost Port:50473 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/id_rsa Username:docker}
	W0729 16:46:21.912363    4568 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50473: connect: connection refused
	I0729 16:46:21.912384    4568 retry.go:31] will retry after 159.743001ms: dial tcp [::1]:50473: connect: connection refused
	W0729 16:46:22.104824    4568 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 16:46:22.104914    4568 ssh_runner.go:195] Run: systemctl --version
	I0729 16:46:22.107118    4568 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 16:46:22.109140    4568 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 16:46:22.109176    4568 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 16:46:22.112413    4568 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
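	Note: %!p(MISSING) in the two find commands above is the same Go missing-operand marker; the intended find format was presumably -printf "%p, " (print each matched CNI config path). The commands rewrite any bridge/podman CNI configs so their pod network becomes minikube's 10.244.0.0/16. A simplified sketch of that intent, with quoting normalized (the logged sed scripts are the authoritative form):
	
	  # hypothetical simplified equivalent, not the verbatim logged command
	  sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*bridge*' \
	    -not -name '*podman*' -not -name '*.mk_disabled' -printf '%p, ' \
	    -exec sudo sed -i -r \
	      -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' {} \;
	
	The cni.go:308 line below shows one file was rewritten: /etc/cni/net.d/87-podman-bridge.conflist.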
	I0729 16:46:22.117495    4568 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 16:46:22.117503    4568 start.go:495] detecting cgroup driver to use...
	I0729 16:46:22.117577    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 16:46:22.124380    4568 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 16:46:22.127464    4568 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 16:46:22.130432    4568 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 16:46:22.130456    4568 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 16:46:22.133672    4568 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 16:46:22.136381    4568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 16:46:22.139419    4568 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 16:46:22.142451    4568 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 16:46:22.145410    4568 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 16:46:22.148134    4568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 16:46:22.151332    4568 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 16:46:22.154685    4568 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 16:46:22.157441    4568 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 16:46:22.160077    4568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:46:22.237440    4568 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0729 16:46:22.243729    4568 start.go:495] detecting cgroup driver to use...
	I0729 16:46:22.243815    4568 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 16:46:22.249080    4568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 16:46:22.253861    4568 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 16:46:22.266372    4568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 16:46:22.270926    4568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 16:46:22.275479    4568 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0729 16:46:22.335767    4568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 16:46:22.341348    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 16:46:22.347117    4568 ssh_runner.go:195] Run: which cri-dockerd
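	Note: crictl.yaml was written twice during this start: first (at 22.117577) pointing at the containerd socket while containerd was still a candidate runtime, then (at 22.341348, above) pointing at cri-dockerd after containerd and crio were stopped in favor of docker. Both writes use tee without -a, so the second fully replaces the first, leaving:
	
	  # /etc/crictl.yaml on the guest after the second write
	  runtime-endpoint: unix:///var/run/cri-dockerd.sock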
	I0729 16:46:22.348465    4568 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 16:46:22.351350    4568 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 16:46:22.356064    4568 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 16:46:22.447598    4568 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 16:46:22.536018    4568 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 16:46:22.536087    4568 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 16:46:22.541531    4568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:46:22.619354    4568 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 16:46:23.744162    4568 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.124805041s)
	I0729 16:46:23.744228    4568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 16:46:23.748505    4568 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 16:46:23.754326    4568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 16:46:23.759119    4568 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 16:46:23.843181    4568 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 16:46:23.915397    4568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:46:23.998937    4568 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 16:46:24.004876    4568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 16:46:24.009454    4568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:46:24.086208    4568 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 16:46:24.124729    4568 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 16:46:24.124815    4568 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 16:46:24.127075    4568 start.go:563] Will wait 60s for crictl version
	I0729 16:46:24.127127    4568 ssh_runner.go:195] Run: which crictl
	I0729 16:46:24.128514    4568 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 16:46:24.142958    4568 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0729 16:46:24.143038    4568 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 16:46:24.159255    4568 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 16:46:22.936283    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:46:22.936499    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:46:22.960124    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:46:22.960229    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:46:22.978502    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:46:22.978581    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:46:22.991066    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:46:22.991137    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:46:23.001908    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:46:23.001982    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:46:23.012563    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:46:23.012631    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:46:23.026883    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:46:23.026955    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:46:23.037048    4389 logs.go:276] 0 containers: []
	W0729 16:46:23.037064    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:46:23.037118    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:46:23.047888    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:46:23.047903    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:46:23.047909    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:46:23.052967    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:46:23.052973    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:46:23.067039    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:46:23.067049    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:46:23.085158    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:46:23.085168    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:46:23.102147    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:46:23.102158    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:46:23.113777    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:46:23.113787    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:46:23.138224    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:46:23.138232    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:46:23.172847    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:46:23.172857    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:46:23.186006    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:46:23.186015    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:46:23.203269    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:46:23.203279    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:46:23.215581    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:46:23.215592    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:46:23.229876    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:46:23.229889    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:46:23.244873    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:46:23.244884    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:46:23.261578    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:46:23.261587    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:46:23.300462    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:46:23.300476    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:46:23.311419    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:46:23.311430    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
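The block above is minikube's diagnostic collector at work: for each control-plane component it lists matching containers with "docker ps -a --filter=name=k8s_<component>", then tails the last 400 lines of each container's logs. A minimal Go sketch of that loop (the component list and output format are illustrative assumptions, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// For each component, find containers by name filter, then tail its logs,
	// mirroring the "docker ps -a --filter=name=k8s_<comp>" / "docker logs
	// --tail 400 <id>" pairs in the log above.
	for _, comp := range []string{"kube-apiserver", "etcd", "coredns"} {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+comp, "--format", "{{.ID}}").Output()
		if err != nil {
			continue
		}
		for _, id := range strings.Fields(string(out)) {
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", comp, id, logs)
		}
	}
}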
	I0729 16:46:24.180637    4568 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 16:46:24.180705    4568 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 16:46:24.181995    4568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
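The one-liner above makes the hosts entry idempotent: filter out any existing host.minikube.internal line, append a fresh one, and copy the result back over /etc/hosts. A rough Go equivalent of the filtering step (it only prints the result, since writing /etc/hosts needs root):

package main

import (
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, ln := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale entry for host.minikube.internal.
		if !strings.HasSuffix(ln, "\thost.minikube.internal") {
			kept = append(kept, ln)
		}
	}
	kept = append(kept, "10.0.2.2\thost.minikube.internal")
	os.Stdout.WriteString(strings.Join(kept, "\n") + "\n")
}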
	I0729 16:46:24.185558    4568 kubeadm.go:883] updating cluster {Name:stopped-upgrade-480000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50508 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-480000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 16:46:24.185605    4568 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 16:46:24.185643    4568 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 16:46:24.196013    4568 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 16:46:24.196021    4568 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
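The "wasn't preloaded" verdict above stems from the registry rename: the tarball ships images under the legacy k8s.gcr.io prefix, while this minikube build checks for registry.k8s.io names, so equivalent images are treated as missing. A minimal sketch of normalizing the prefix before comparison (helper name and lists are assumptions, not minikube's code):

package main

import (
	"fmt"
	"strings"
)

// normalize maps the legacy registry prefix onto the current one
// (assumption: it mirrors only the k8s.gcr.io -> registry.k8s.io rename).
func normalize(img string) string {
	return strings.Replace(img, "k8s.gcr.io/", "registry.k8s.io/", 1)
}

func main() {
	preloaded := []string{"k8s.gcr.io/kube-apiserver:v1.24.1", "k8s.gcr.io/etcd:3.5.3-0"}
	required := "registry.k8s.io/kube-apiserver:v1.24.1"
	found := false
	for _, img := range preloaded {
		if normalize(img) == required {
			found = true
			break
		}
	}
	fmt.Println("preloaded:", found)
}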
	I0729 16:46:24.196067    4568 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 16:46:24.199501    4568 ssh_runner.go:195] Run: which lz4
	I0729 16:46:24.200810    4568 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 16:46:24.201962    4568 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 16:46:24.201974    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0729 16:46:25.120696    4568 docker.go:649] duration metric: took 919.926292ms to copy over tarball
	I0729 16:46:25.120770    4568 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 16:46:25.824561    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:26.284532    4568 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.163763834s)
	I0729 16:46:26.284545    4568 ssh_runner.go:146] rm: /preloaded.tar.lz4
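The "duration metric" and "Completed:" lines above follow a simple time-and-log pattern: record a start time, run the command, and report the elapsed duration when it is notable. A sketch of that pattern (the one-second reporting threshold is an assumption, not minikube's actual rule):

package main

import (
	"log"
	"os/exec"
	"time"
)

// runLogged times a command and, when it runs longer than a second, emits a
// "Completed:" line with the elapsed duration, echoing the log lines above.
func runLogged(name string, args ...string) error {
	start := time.Now()
	err := exec.Command(name, args...).Run()
	if d := time.Since(start); d > time.Second {
		log.Printf("Completed: %s: (%v)", name, d)
	}
	return err
}

func main() {
	if err := runLogged("sleep", "2"); err != nil {
		log.Fatal(err)
	}
}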
	I0729 16:46:26.299843    4568 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 16:46:26.303168    4568 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 16:46:26.307827    4568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:46:26.387394    4568 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 16:46:27.958374    4568 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.570984792s)
	I0729 16:46:27.958485    4568 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 16:46:27.973307    4568 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 16:46:27.973316    4568 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 16:46:27.973321    4568 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 16:46:27.979149    4568 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:46:27.980799    4568 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:46:27.981997    4568 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:46:27.982109    4568 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:46:27.983503    4568 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:46:27.983704    4568 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:46:27.984773    4568 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:46:27.984903    4568 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:46:27.986189    4568 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:46:27.986212    4568 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:46:27.987872    4568 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:46:27.988155    4568 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 16:46:27.989293    4568 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 16:46:27.989723    4568 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:46:27.990736    4568 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 16:46:27.991358    4568 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 16:46:28.406798    4568 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:46:28.417247    4568 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 16:46:28.417281    4568 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:46:28.417349    4568 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:46:28.428035    4568 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 16:46:28.436288    4568 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	W0729 16:46:28.437849    4568 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 16:46:28.437929    4568 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:46:28.438015    4568 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:46:28.440931    4568 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:46:28.448746    4568 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 16:46:28.448769    4568 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:46:28.448841    4568 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:46:28.470447    4568 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 16:46:28.470468    4568 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 16:46:28.470472    4568 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:46:28.470479    4568 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:46:28.470527    4568 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:46:28.470542    4568 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 16:46:28.470552    4568 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:46:28.470527    4568 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:46:28.470544    4568 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 16:46:28.470581    4568 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:46:28.483503    4568 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 16:46:28.487780    4568 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0729 16:46:28.489525    4568 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 16:46:28.489591    4568 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0729 16:46:28.489641    4568 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 16:46:28.497384    4568 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 16:46:28.497422    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0729 16:46:28.497463    4568 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0729 16:46:28.497486    4568 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0729 16:46:28.497523    4568 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0729 16:46:28.513261    4568 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 16:46:28.513924    4568 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 16:46:28.514021    4568 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 16:46:28.542554    4568 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0729 16:46:28.542557    4568 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 16:46:28.542585    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0729 16:46:28.542593    4568 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 16:46:28.542641    4568 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 16:46:28.559495    4568 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 16:46:28.559518    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0729 16:46:28.566671    4568 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 16:46:28.602130    4568 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 16:46:28.602151    4568 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 16:46:28.602157    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0729 16:46:28.615204    4568 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 16:46:28.615321    4568 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:46:28.636059    4568 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0729 16:46:28.636102    4568 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 16:46:28.636120    4568 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:46:28.636177    4568 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:46:28.649715    4568 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 16:46:28.649840    4568 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 16:46:28.651160    4568 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0729 16:46:28.651171    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0729 16:46:28.679826    4568 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 16:46:28.679848    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0729 16:46:28.916624    4568 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 16:46:28.916661    4568 cache_images.go:92] duration metric: took 943.347959ms to LoadCachedImages
	W0729 16:46:28.916703    4568 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
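Each required image above goes through the same flow: inspect the hash on the node, mark it "needs transfer" if absent, remove the stale tag, stat the cached tarball path on the node, scp it from the local cache, and finally "docker load" it. The warning fires because kube-apiserver_v1.24.1 is missing from the local cache directory itself. A sketch of the existence-check step (host and path are placeholders):

package main

import (
	"fmt"
	"os/exec"
)

// existsOnNode mirrors the existence check above: stat the file over ssh and
// treat any non-zero exit as "missing".
func existsOnNode(host, path string) bool {
	return exec.Command("ssh", host, "stat", "-c", "%s %y", path).Run() == nil
}

func main() {
	if !existsOnNode("minikube", "/var/lib/minikube/images/pause_3.7") {
		fmt.Println("needs transfer: scp from the local cache, then docker load")
	}
}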
	I0729 16:46:28.916709    4568 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 16:46:28.916759    4568 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-480000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-480000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 16:46:28.916820    4568 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 16:46:28.936604    4568 cni.go:84] Creating CNI manager for ""
	I0729 16:46:28.936620    4568 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:46:28.936624    4568 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 16:46:28.936633    4568 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-480000 NodeName:stopped-upgrade-480000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 16:46:28.936697    4568 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-480000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
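The cgroupDriver: cgroupfs value in the KubeletConfiguration above comes from the earlier "docker info --format {{.CgroupDriver}}" probe; kubelet and the container runtime must agree on the cgroup driver or pods fail to start. A sketch of that probe:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Query Docker for its cgroup driver, as the log above does.
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		panic(err)
	}
	driver := strings.TrimSpace(string(out))
	// The generated KubeletConfiguration should carry the same value.
	fmt.Printf("cgroupDriver: %s\n", driver)
}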
	I0729 16:46:28.936756    4568 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 16:46:28.939669    4568 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 16:46:28.939701    4568 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 16:46:28.942711    4568 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 16:46:28.947869    4568 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 16:46:28.952848    4568 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0729 16:46:28.957934    4568 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 16:46:28.959096    4568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 16:46:28.962999    4568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:46:29.048794    4568 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 16:46:29.053878    4568 certs.go:68] Setting up /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000 for IP: 10.0.2.15
	I0729 16:46:29.053890    4568 certs.go:194] generating shared ca certs ...
	I0729 16:46:29.053899    4568 certs.go:226] acquiring lock for ca certs: {Name:mk4279a132dfe000316d0221b0d97d4e537df506 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:46:29.054074    4568 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19347-923/.minikube/ca.key
	I0729 16:46:29.054110    4568 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19347-923/.minikube/proxy-client-ca.key
	I0729 16:46:29.054117    4568 certs.go:256] generating profile certs ...
	I0729 16:46:29.054178    4568 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/client.key
	I0729 16:46:29.054196    4568 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/apiserver.key.35715295
	I0729 16:46:29.054205    4568 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/apiserver.crt.35715295 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 16:46:29.170842    4568 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/apiserver.crt.35715295 ...
	I0729 16:46:29.170853    4568 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/apiserver.crt.35715295: {Name:mke6eca6bee11c09e4ec4e59ab31263d0485cd20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:46:29.171107    4568 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/apiserver.key.35715295 ...
	I0729 16:46:29.171112    4568 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/apiserver.key.35715295: {Name:mk62bbe6b816963ecc85c7b294289074aed7a646 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:46:29.171239    4568 certs.go:381] copying /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/apiserver.crt.35715295 -> /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/apiserver.crt
	I0729 16:46:29.171359    4568 certs.go:385] copying /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/apiserver.key.35715295 -> /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/apiserver.key
	I0729 16:46:29.171478    4568 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/proxy-client.key
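The apiserver certificate above is generated with the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15], i.e. the service VIP, loopback, and node addresses. A sketch of building such a cert with Go's crypto/x509 (self-signed here to stay self-contained; the real cert is signed by minikubeCA):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		// Same SAN set as the apiserver cert above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("generated %d-byte DER certificate\n", len(der))
}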
	I0729 16:46:29.171598    4568 certs.go:484] found cert: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/1390.pem (1338 bytes)
	W0729 16:46:29.171620    4568 certs.go:480] ignoring /Users/jenkins/minikube-integration/19347-923/.minikube/certs/1390_empty.pem, impossibly tiny 0 bytes
	I0729 16:46:29.171625    4568 certs.go:484] found cert: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 16:46:29.171643    4568 certs.go:484] found cert: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem (1082 bytes)
	I0729 16:46:29.171661    4568 certs.go:484] found cert: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem (1123 bytes)
	I0729 16:46:29.171680    4568 certs.go:484] found cert: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/key.pem (1679 bytes)
	I0729 16:46:29.171718    4568 certs.go:484] found cert: /Users/jenkins/minikube-integration/19347-923/.minikube/files/etc/ssl/certs/13902.pem (1708 bytes)
	I0729 16:46:29.172025    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 16:46:29.178970    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 16:46:29.185733    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 16:46:29.192428    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 16:46:29.199051    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 16:46:29.206033    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 16:46:29.212621    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 16:46:29.219192    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 16:46:29.226456    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 16:46:29.233043    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/certs/1390.pem --> /usr/share/ca-certificates/1390.pem (1338 bytes)
	I0729 16:46:29.239453    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/files/etc/ssl/certs/13902.pem --> /usr/share/ca-certificates/13902.pem (1708 bytes)
	I0729 16:46:29.246556    4568 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 16:46:29.251721    4568 ssh_runner.go:195] Run: openssl version
	I0729 16:46:29.253449    4568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 16:46:29.256134    4568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:46:29.257616    4568 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 23:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:46:29.257645    4568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:46:29.259246    4568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 16:46:29.262505    4568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1390.pem && ln -fs /usr/share/ca-certificates/1390.pem /etc/ssl/certs/1390.pem"
	I0729 16:46:29.265613    4568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1390.pem
	I0729 16:46:29.267120    4568 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 23:11 /usr/share/ca-certificates/1390.pem
	I0729 16:46:29.267143    4568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1390.pem
	I0729 16:46:29.268990    4568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1390.pem /etc/ssl/certs/51391683.0"
	I0729 16:46:29.271766    4568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13902.pem && ln -fs /usr/share/ca-certificates/13902.pem /etc/ssl/certs/13902.pem"
	I0729 16:46:29.274990    4568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13902.pem
	I0729 16:46:29.276421    4568 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 23:11 /usr/share/ca-certificates/13902.pem
	I0729 16:46:29.276439    4568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13902.pem
	I0729 16:46:29.278175    4568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13902.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 16:46:29.280898    4568 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 16:46:29.282168    4568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 16:46:29.284100    4568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 16:46:29.286057    4568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 16:46:29.287939    4568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 16:46:29.290040    4568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 16:46:29.291884    4568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
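The openssl "-checkend 86400" calls above assert that each certificate remains valid for at least another 24 hours. A Go equivalent for a single PEM file (the path is a placeholder):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt") // placeholder path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
	} else {
		fmt.Println("certificate ok")
	}
}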
	I0729 16:46:29.293983    4568 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-480000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50508 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-480000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 16:46:29.294051    4568 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 16:46:29.304453    4568 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 16:46:29.307543    4568 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 16:46:29.307549    4568 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 16:46:29.307575    4568 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 16:46:29.310381    4568 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 16:46:29.310655    4568 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-480000" does not appear in /Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:46:29.310752    4568 kubeconfig.go:62] /Users/jenkins/minikube-integration/19347-923/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-480000" cluster setting kubeconfig missing "stopped-upgrade-480000" context setting]
	I0729 16:46:29.310929    4568 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/kubeconfig: {Name:mkd561657b833051fbf9227370398307b87f9720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:46:29.311352    4568 kapi.go:59] client config for stopped-upgrade-480000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/client.key", CAFile:"/Users/jenkins/minikube-integration/19347-923/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101ae0080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 16:46:29.311665    4568 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 16:46:29.314298    4568 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-480000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
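Config drift is detected by running "diff -u" against the deployed kubeadm.yaml: exit status 0 means no drift, exit status 1 means the files differ and the cluster is reconfigured from the new file. A sketch of interpreting that exit code:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.CombinedOutput()
	if err == nil {
		fmt.Println("no drift")
		return
	}
	// Exit status 1 from diff means "files differ": reconfigure from the new file.
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		fmt.Printf("detected kubeadm config drift:\n%s", out)
		return
	}
	panic(err) // anything else is a real failure
}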
	I0729 16:46:29.314303    4568 kubeadm.go:1160] stopping kube-system containers ...
	I0729 16:46:29.314340    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 16:46:29.324215    4568 docker.go:483] Stopping containers: [ea007e6b4743 4866a9c899c6 6b64e4a0a495 df1f20080bd7 405fef0e15b0 bcd664408a20 2aa835c9fd1e a7d1fe2e3558]
	I0729 16:46:29.324282    4568 ssh_runner.go:195] Run: docker stop ea007e6b4743 4866a9c899c6 6b64e4a0a495 df1f20080bd7 405fef0e15b0 bcd664408a20 2aa835c9fd1e a7d1fe2e3558
	I0729 16:46:29.334668    4568 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 16:46:29.340127    4568 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 16:46:29.343205    4568 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 16:46:29.343214    4568 kubeadm.go:157] found existing configuration files:
	
	I0729 16:46:29.343233    4568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/admin.conf
	I0729 16:46:29.346224    4568 kubeadm.go:163] "https://control-plane.minikube.internal:50508" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 16:46:29.346248    4568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 16:46:29.348849    4568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/kubelet.conf
	I0729 16:46:29.351306    4568 kubeadm.go:163] "https://control-plane.minikube.internal:50508" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 16:46:29.351331    4568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 16:46:29.354309    4568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/controller-manager.conf
	I0729 16:46:29.356958    4568 kubeadm.go:163] "https://control-plane.minikube.internal:50508" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 16:46:29.356980    4568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 16:46:29.359450    4568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/scheduler.conf
	I0729 16:46:29.362549    4568 kubeadm.go:163] "https://control-plane.minikube.internal:50508" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 16:46:29.362575    4568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 16:46:29.365438    4568 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 16:46:29.368193    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:46:29.389995    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:46:29.803236    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:46:29.938108    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:46:29.960153    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:46:29.984287    4568 api_server.go:52] waiting for apiserver process to appear ...
	I0729 16:46:29.984371    4568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:46:30.485812    4568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:46:30.826666    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
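The wait loop above polls https://10.0.2.15:8443/healthz with a short client timeout and, when the request fails, falls back to re-collecting component logs; the "context deadline exceeded" errors are the client timing out, not a response from the server. A sketch of one poll (the timeout value is an assumption, and TLS verification is skipped because the cluster CA is not loaded here):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumed; minikube's timeout may differ
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		},
	}
	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}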
	I0729 16:46:30.826796    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:46:30.839366    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:46:30.839442    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:46:30.851921    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:46:30.851998    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:46:30.864772    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:46:30.864840    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:46:30.875981    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:46:30.876055    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:46:30.886709    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:46:30.886780    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:46:30.898188    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:46:30.898260    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:46:30.909990    4389 logs.go:276] 0 containers: []
	W0729 16:46:30.910002    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:46:30.910065    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:46:30.920829    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:46:30.920846    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:46:30.920852    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:46:30.962620    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:46:30.962635    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:46:30.976223    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:46:30.976237    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:46:30.989402    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:46:30.989414    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:46:30.994029    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:46:30.994037    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:46:31.009878    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:46:31.009892    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:46:31.028332    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:46:31.028349    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:46:31.040426    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:46:31.040438    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:46:31.080276    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:46:31.080291    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:46:31.094934    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:46:31.094945    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:46:31.110971    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:46:31.110983    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:46:31.130551    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:46:31.130561    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:46:31.142127    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:46:31.142141    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:46:31.154521    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:46:31.154533    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:46:31.166513    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:46:31.166524    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:46:31.178187    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:46:31.178200    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:46:33.705216    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:30.986429    4568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:46:30.990956    4568 api_server.go:72] duration metric: took 1.006691625s to wait for apiserver process to appear ...
	I0729 16:46:30.990966    4568 api_server.go:88] waiting for apiserver healthz status ...
	I0729 16:46:30.990976    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:38.706398    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:46:38.706557    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:46:38.719257    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:46:38.719339    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:46:38.737509    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:46:38.737588    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:46:38.748203    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:46:38.748273    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:46:38.758490    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:46:38.758562    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:46:38.768904    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:46:38.768977    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:46:38.779348    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:46:38.779420    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:46:38.790109    4389 logs.go:276] 0 containers: []
	W0729 16:46:38.790120    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:46:38.790179    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:46:38.801069    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:46:38.801085    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:46:38.801092    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:46:38.805335    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:46:38.805343    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:46:38.839045    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:46:38.839059    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:46:38.854217    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:46:38.854226    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:46:38.867552    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:46:38.867563    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:46:38.879487    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:46:38.879498    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:46:38.891723    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:46:38.891734    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:46:38.906053    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:46:38.906064    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:46:38.919955    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:46:38.919966    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:46:38.931281    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:46:38.931292    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:46:38.943125    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:46:38.943136    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:46:38.967203    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:46:38.967210    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:46:39.009053    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:46:39.009064    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:46:39.021571    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:46:39.021582    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:46:39.036277    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:46:39.036288    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:46:39.048801    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:46:39.048812    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
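
The "docker ps -a --filter=name=k8s_..." and "docker logs --tail 400 ..." pairs that repeat throughout this run form one gather loop: list the container IDs for each control-plane component, then tail each matching container's logs. A minimal, self-contained Go sketch of that pattern (an illustration only, not minikube's actual logs.go; it assumes a local docker binary and the same k8s_ name filters seen above) is:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs runs: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println("listing", c, "failed:", err)
                continue
            }
            for _, id := range ids {
                // docker logs --tail 400 <id>, as in the Run: lines above
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
            }
        }
    }

A zero-ID result corresponds to the "No container was found matching ..." warnings in the log (as with "kindnet" here).
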
	I0729 16:46:35.992988    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:46:35.993016    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:41.568029    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:40.993179    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:46:40.993219    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:46.570249    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:46:46.570421    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:46:46.581578    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:46:46.581655    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:46:46.592846    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:46:46.592915    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:46:46.603424    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:46:46.603508    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:46:46.614244    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:46:46.614317    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:46:46.626187    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:46:46.626253    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:46:46.640141    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:46:46.640208    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:46:46.651233    4389 logs.go:276] 0 containers: []
	W0729 16:46:46.651242    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:46:46.651311    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:46:46.661671    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:46:46.661687    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:46:46.661693    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:46:46.703433    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:46:46.703444    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:46:46.717375    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:46:46.717386    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:46:46.732661    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:46:46.732672    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:46:46.744293    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:46:46.744304    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:46:46.756213    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:46:46.756224    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:46:46.761072    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:46:46.761079    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:46:46.775329    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:46:46.775340    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:46:46.787079    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:46:46.787090    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:46:46.798637    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:46:46.798648    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:46:46.822055    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:46:46.822064    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:46:46.855985    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:46:46.855995    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:46:46.870160    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:46:46.870173    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:46:46.882850    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:46:46.882861    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:46:46.894860    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:46:46.894871    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:46:46.914255    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:46:46.914266    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:46:49.427409    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:45.993588    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:46:45.993624    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:54.429565    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:46:54.429759    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:46:54.447609    4389 logs.go:276] 2 containers: [ce83dfd45139 f945667ff622]
	I0729 16:46:54.447692    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:46:54.460886    4389 logs.go:276] 2 containers: [3623f608bb6a 1b2dfc87f3de]
	I0729 16:46:54.460961    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:46:54.472398    4389 logs.go:276] 1 containers: [6dc4699b82ac]
	I0729 16:46:54.472473    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:46:54.483461    4389 logs.go:276] 2 containers: [27dd028d20fa 7c093af5a7a3]
	I0729 16:46:54.483530    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:46:54.499196    4389 logs.go:276] 1 containers: [4404b14ff031]
	I0729 16:46:54.499268    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:46:54.511978    4389 logs.go:276] 2 containers: [5e50180004b5 f1081b26aebd]
	I0729 16:46:54.512044    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:46:54.529002    4389 logs.go:276] 0 containers: []
	W0729 16:46:54.529013    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:46:54.529073    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:46:54.539262    4389 logs.go:276] 1 containers: [29829f57a242]
	I0729 16:46:54.539279    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:46:54.539284    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:46:54.563741    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:46:54.563759    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:46:54.576028    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:46:54.576046    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:46:54.580978    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:46:54.580985    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:46:54.616342    4389 logs.go:123] Gathering logs for etcd [3623f608bb6a] ...
	I0729 16:46:54.616355    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3623f608bb6a"
	I0729 16:46:54.630637    4389 logs.go:123] Gathering logs for etcd [1b2dfc87f3de] ...
	I0729 16:46:54.630648    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2dfc87f3de"
	I0729 16:46:54.649608    4389 logs.go:123] Gathering logs for kube-proxy [4404b14ff031] ...
	I0729 16:46:54.649619    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4404b14ff031"
	I0729 16:46:54.661495    4389 logs.go:123] Gathering logs for kube-apiserver [f945667ff622] ...
	I0729 16:46:54.661506    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f945667ff622"
	I0729 16:46:54.673955    4389 logs.go:123] Gathering logs for kube-controller-manager [5e50180004b5] ...
	I0729 16:46:54.673965    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e50180004b5"
	I0729 16:46:54.691407    4389 logs.go:123] Gathering logs for kube-controller-manager [f1081b26aebd] ...
	I0729 16:46:54.691417    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1081b26aebd"
	I0729 16:46:54.703117    4389 logs.go:123] Gathering logs for storage-provisioner [29829f57a242] ...
	I0729 16:46:54.703131    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29829f57a242"
	I0729 16:46:54.714989    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:46:54.715002    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:46:54.754074    4389 logs.go:123] Gathering logs for kube-apiserver [ce83dfd45139] ...
	I0729 16:46:54.754081    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce83dfd45139"
	I0729 16:46:54.768049    4389 logs.go:123] Gathering logs for kube-scheduler [27dd028d20fa] ...
	I0729 16:46:54.768058    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27dd028d20fa"
	I0729 16:46:54.779345    4389 logs.go:123] Gathering logs for kube-scheduler [7c093af5a7a3] ...
	I0729 16:46:54.779355    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c093af5a7a3"
	I0729 16:46:54.794230    4389 logs.go:123] Gathering logs for coredns [6dc4699b82ac] ...
	I0729 16:46:54.794239    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dc4699b82ac"
	I0729 16:46:50.994093    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:46:50.994150    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:57.307777    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:55.994849    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:46:55.994891    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:02.310494    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:02.310579    4389 kubeadm.go:597] duration metric: took 4m3.559144333s to restartPrimaryControlPlane
	W0729 16:47:02.310631    4389 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 16:47:02.310652    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
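
The alternating "Checking apiserver healthz at https://10.0.2.15:8443/healthz ..." and "stopped: ... context deadline exceeded" lines from both processes are a polling loop: each probe gets its own short deadline, and the loop retries until an overall budget (here roughly the 4m3.5s reported above) runs out, after which minikube falls back to resetting the cluster. A minimal Go sketch of such a loop, assuming the endpoint and timings shown in the log and skipping TLS verification only to keep the sketch self-contained, is:

    package main

    import (
        "context"
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // The real client trusts the cluster CA; skipping verification here
        // is an assumption made for self-containment, not minikube behaviour.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        overall := time.Now().Add(4 * time.Minute) // overall retry budget
        for time.Now().Before(overall) {
            ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) // per-probe deadline
            req, err := http.NewRequestWithContext(ctx, http.MethodGet,
                "https://10.0.2.15:8443/healthz", nil)
            if err != nil {
                cancel()
                panic(err)
            }
            resp, err := client.Do(req)
            cancel()
            if err != nil {
                fmt.Println("stopped:", err) // e.g. "context deadline exceeded"
                time.Sleep(time.Second)      // avoid a hot loop on immediate failures
                continue
            }
            healthy := resp.StatusCode == http.StatusOK
            resp.Body.Close()
            if healthy {
                fmt.Println("apiserver healthy")
                return
            }
        }
        fmt.Println("gave up waiting for apiserver")
    }
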
	I0729 16:47:03.299426    4389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 16:47:03.304218    4389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 16:47:03.306985    4389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 16:47:03.309663    4389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 16:47:03.309668    4389 kubeadm.go:157] found existing configuration files:
	
	I0729 16:47:03.309692    4389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50302 /etc/kubernetes/admin.conf
	I0729 16:47:03.312454    4389 kubeadm.go:163] "https://control-plane.minikube.internal:50302" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50302 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 16:47:03.312483    4389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 16:47:03.315342    4389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50302 /etc/kubernetes/kubelet.conf
	I0729 16:47:03.317626    4389 kubeadm.go:163] "https://control-plane.minikube.internal:50302" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50302 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 16:47:03.317648    4389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 16:47:03.320678    4389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50302 /etc/kubernetes/controller-manager.conf
	I0729 16:47:03.323400    4389 kubeadm.go:163] "https://control-plane.minikube.internal:50302" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50302 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 16:47:03.323421    4389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 16:47:03.325974    4389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50302 /etc/kubernetes/scheduler.conf
	I0729 16:47:03.329074    4389 kubeadm.go:163] "https://control-plane.minikube.internal:50302" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50302 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 16:47:03.329096    4389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 16:47:03.332233    4389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 16:47:03.352242    4389 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 16:47:03.352277    4389 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 16:47:03.401829    4389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 16:47:03.401886    4389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 16:47:03.401941    4389 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 16:47:03.449822    4389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 16:47:03.454002    4389 out.go:204]   - Generating certificates and keys ...
	I0729 16:47:03.454036    4389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 16:47:03.454078    4389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 16:47:03.454122    4389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 16:47:03.454159    4389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 16:47:03.454199    4389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 16:47:03.454228    4389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 16:47:03.454268    4389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 16:47:03.454303    4389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 16:47:03.454351    4389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 16:47:03.454397    4389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 16:47:03.454422    4389 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 16:47:03.454454    4389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 16:47:03.514753    4389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 16:47:03.553967    4389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 16:47:03.613309    4389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 16:47:03.723956    4389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 16:47:03.753530    4389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 16:47:03.754629    4389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 16:47:03.754656    4389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 16:47:03.822971    4389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 16:47:03.826006    4389 out.go:204]   - Booting up control plane ...
	I0729 16:47:03.826051    4389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 16:47:03.826095    4389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 16:47:03.826165    4389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 16:47:03.826287    4389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 16:47:03.826390    4389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 16:47:00.995651    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:00.995692    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:07.826351    4389 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001776 seconds
	I0729 16:47:07.826410    4389 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 16:47:07.830741    4389 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 16:47:08.343913    4389 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 16:47:08.344183    4389 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-980000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 16:47:08.848874    4389 kubeadm.go:310] [bootstrap-token] Using token: f3lwuj.pt0shg6ftprwpz00
	I0729 16:47:08.855150    4389 out.go:204]   - Configuring RBAC rules ...
	I0729 16:47:08.855206    4389 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 16:47:08.855248    4389 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 16:47:08.856706    4389 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 16:47:08.857549    4389 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 16:47:08.858487    4389 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 16:47:08.859410    4389 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 16:47:08.862685    4389 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 16:47:09.037515    4389 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 16:47:09.253460    4389 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 16:47:09.254048    4389 kubeadm.go:310] 
	I0729 16:47:09.254077    4389 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 16:47:09.254080    4389 kubeadm.go:310] 
	I0729 16:47:09.254121    4389 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 16:47:09.254123    4389 kubeadm.go:310] 
	I0729 16:47:09.254135    4389 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 16:47:09.254163    4389 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 16:47:09.254201    4389 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 16:47:09.254231    4389 kubeadm.go:310] 
	I0729 16:47:09.254260    4389 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 16:47:09.254263    4389 kubeadm.go:310] 
	I0729 16:47:09.254288    4389 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 16:47:09.254303    4389 kubeadm.go:310] 
	I0729 16:47:09.254333    4389 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 16:47:09.254401    4389 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 16:47:09.254441    4389 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 16:47:09.254444    4389 kubeadm.go:310] 
	I0729 16:47:09.254524    4389 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 16:47:09.254612    4389 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 16:47:09.254617    4389 kubeadm.go:310] 
	I0729 16:47:09.254661    4389 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token f3lwuj.pt0shg6ftprwpz00 \
	I0729 16:47:09.254712    4389 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eee67fd9c4d612d4862a690faaa2f19934e920987025477254241b5525ba3040 \
	I0729 16:47:09.254723    4389 kubeadm.go:310] 	--control-plane 
	I0729 16:47:09.254726    4389 kubeadm.go:310] 
	I0729 16:47:09.254767    4389 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 16:47:09.254777    4389 kubeadm.go:310] 
	I0729 16:47:09.254825    4389 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token f3lwuj.pt0shg6ftprwpz00 \
	I0729 16:47:09.254872    4389 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eee67fd9c4d612d4862a690faaa2f19934e920987025477254241b5525ba3040 
	I0729 16:47:09.254938    4389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 16:47:09.254947    4389 cni.go:84] Creating CNI manager for ""
	I0729 16:47:09.254956    4389 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:47:09.259190    4389 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 16:47:09.267225    4389 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 16:47:09.273305    4389 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 16:47:09.279422    4389 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 16:47:09.279486    4389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:47:09.279504    4389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-980000 minikube.k8s.io/updated_at=2024_07_29T16_47_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b13baeaf4895dcc6a8c5d0ab64a27ff86dff4ae3 minikube.k8s.io/name=running-upgrade-980000 minikube.k8s.io/primary=true
	I0729 16:47:09.317133    4389 ops.go:34] apiserver oom_adj: -16
	I0729 16:47:09.318018    4389 kubeadm.go:1113] duration metric: took 38.586792ms to wait for elevateKubeSystemPrivileges
	I0729 16:47:09.318028    4389 kubeadm.go:394] duration metric: took 4m10.580784083s to StartCluster
	I0729 16:47:09.318038    4389 settings.go:142] acquiring lock: {Name:mk3b097bc26d2850dd7467a616788f5486d088c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:47:09.318127    4389 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:47:09.318545    4389 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/kubeconfig: {Name:mkd561657b833051fbf9227370398307b87f9720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:47:09.318758    4389 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:47:09.318812    4389 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 16:47:09.318849    4389 config.go:182] Loaded profile config "running-upgrade-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:47:09.318853    4389 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-980000"
	I0729 16:47:09.318868    4389 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-980000"
	W0729 16:47:09.318871    4389 addons.go:243] addon storage-provisioner should already be in state true
	I0729 16:47:09.318884    4389 host.go:66] Checking if "running-upgrade-980000" exists ...
	I0729 16:47:09.318875    4389 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-980000"
	I0729 16:47:09.318913    4389 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-980000"
	I0729 16:47:09.319776    4389 kapi.go:59] client config for running-upgrade-980000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/client.key", CAFile:"/Users/jenkins/minikube-integration/19347-923/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105ef4080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
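
The rest.Config dump above shows the client pointed at https://10.0.2.15:8443 with the profile's client certificate, key, and CA file. A minimal client-go sketch that builds an equivalent config and typed clientset (rest.Config and kubernetes.NewForConfig are real client-go APIs; only the wiring here is illustrative, using the file paths from the log) is:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                // Paths as reported in the kapi.go line above.
                CertFile: "/Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/client.crt",
                KeyFile:  "/Users/jenkins/minikube-integration/19347-923/.minikube/profiles/running-upgrade-980000/client.key",
                CAFile:   "/Users/jenkins/minikube-integration/19347-923/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("clientset ready: %T\n", clientset)
    }
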
	I0729 16:47:09.319894    4389 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-980000"
	W0729 16:47:09.319899    4389 addons.go:243] addon default-storageclass should already be in state true
	I0729 16:47:09.319906    4389 host.go:66] Checking if "running-upgrade-980000" exists ...
	I0729 16:47:09.323189    4389 out.go:177] * Verifying Kubernetes components...
	I0729 16:47:09.323544    4389 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 16:47:09.326251    4389 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 16:47:09.326258    4389 sshutil.go:53] new ssh client: &{IP:localhost Port:50270 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/running-upgrade-980000/id_rsa Username:docker}
	I0729 16:47:09.329128    4389 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:47:09.333203    4389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:47:09.337167    4389 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 16:47:09.337173    4389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 16:47:09.337179    4389 sshutil.go:53] new ssh client: &{IP:localhost Port:50270 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/running-upgrade-980000/id_rsa Username:docker}
	I0729 16:47:09.411124    4389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 16:47:09.415897    4389 api_server.go:52] waiting for apiserver process to appear ...
	I0729 16:47:09.415936    4389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:47:09.419610    4389 api_server.go:72] duration metric: took 100.841375ms to wait for apiserver process to appear ...
	I0729 16:47:09.419617    4389 api_server.go:88] waiting for apiserver healthz status ...
	I0729 16:47:09.419623    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:09.458073    4389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 16:47:09.475117    4389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 16:47:05.996622    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:05.996660    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:14.421664    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:14.421707    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:10.997857    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:10.997890    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:19.421933    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:19.421976    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:15.999476    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:15.999520    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:24.422258    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:24.422310    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:21.001522    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:21.001545    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:29.422711    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:29.422752    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:26.003672    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:26.003713    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:34.423322    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:34.423374    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:31.005909    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:31.006039    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:47:31.021018    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:47:31.021104    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:47:31.032798    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:47:31.032873    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:47:31.045913    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:47:31.045984    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:47:31.056560    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:47:31.056648    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:47:31.067379    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:47:31.067454    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:47:31.078613    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:47:31.078681    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:47:31.089285    4568 logs.go:276] 0 containers: []
	W0729 16:47:31.089297    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:47:31.089359    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:47:31.099563    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:47:31.099585    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:47:31.099590    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:47:31.112987    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:47:31.112998    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:47:31.125018    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:47:31.125030    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:47:31.129568    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:47:31.129580    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:47:31.233261    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:47:31.233275    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:47:31.249195    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:47:31.249208    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:47:31.264804    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:47:31.264816    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:47:31.276916    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:47:31.276928    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:47:31.288670    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:47:31.288681    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:47:31.300350    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:47:31.300359    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:47:31.311549    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:47:31.311559    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:47:31.350673    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:47:31.350686    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:47:31.365457    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:47:31.365471    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:47:31.383998    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:47:31.384009    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:47:31.431042    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:47:31.431053    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:47:31.444803    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:47:31.444813    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:47:33.971281    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:39.424033    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:39.424060    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 16:47:39.794562    4389 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 16:47:39.798767    4389 out.go:177] * Enabled addons: storage-provisioner
	I0729 16:47:39.806688    4389 addons.go:510] duration metric: took 30.488316416s for enable addons: enabled=[storage-provisioner]
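
The 'default-storageclass' error above is the addon trying to list StorageClasses through the still-unreachable apiserver. A minimal client-go sketch of that call, reusing a config like the one built earlier (illustrative only; with the apiserver down it fails with the same "dial tcp ... i/o timeout" shown in the log), is:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // TLS cert/key/CA files omitted here for brevity; see the earlier sketch.
        cfg := &rest.Config{Host: "https://10.0.2.15:8443"}
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        scs, err := clientset.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            // With the apiserver unreachable, this surfaces the i/o timeout above.
            fmt.Println("listing StorageClasses failed:", err)
            return
        }
        for _, sc := range scs.Items {
            fmt.Println(sc.Name)
        }
    }
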
	I0729 16:47:38.973524    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:38.973713    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:47:38.991404    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:47:38.991490    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:47:39.005694    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:47:39.005767    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:47:39.015937    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:47:39.016009    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:47:39.026083    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:47:39.026153    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:47:39.036983    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:47:39.037058    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:47:39.047915    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:47:39.047984    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:47:39.058373    4568 logs.go:276] 0 containers: []
	W0729 16:47:39.058384    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:47:39.058439    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:47:39.069115    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:47:39.069132    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:47:39.069138    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:47:39.081058    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:47:39.081069    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:47:39.093466    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:47:39.093476    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:47:39.118447    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:47:39.118456    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:47:39.122478    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:47:39.122484    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:47:39.139008    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:47:39.139018    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:47:39.154485    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:47:39.154501    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:47:39.169701    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:47:39.169712    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:47:39.208022    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:47:39.208033    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:47:39.221982    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:47:39.221993    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:47:39.237214    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:47:39.237225    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:47:39.249128    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:47:39.249139    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:47:39.287718    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:47:39.287732    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:47:39.299448    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:47:39.299459    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:47:39.336295    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:47:39.336304    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:47:39.348085    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:47:39.348095    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:47:44.425009    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:44.425064    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:41.867113    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:49.426300    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:49.426340    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:46.869419    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:46.869524    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:47:46.880771    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:47:46.880846    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:47:46.891117    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:47:46.891209    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:47:46.901802    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:47:46.901864    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:47:46.912259    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:47:46.912337    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:47:46.922317    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:47:46.922384    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:47:46.932584    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:47:46.932645    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:47:46.942390    4568 logs.go:276] 0 containers: []
	W0729 16:47:46.942401    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:47:46.942452    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:47:46.952933    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:47:46.952950    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:47:46.952957    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:47:46.964284    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:47:46.964297    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:47:46.979465    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:47:46.979477    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:47:47.016390    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:47:47.016404    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:47:47.054079    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:47:47.054094    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:47:47.069072    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:47:47.069083    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:47:47.080831    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:47:47.080844    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:47:47.093624    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:47:47.093637    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:47:47.105268    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:47:47.105280    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:47:47.130932    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:47:47.130944    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:47:47.145127    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:47:47.145137    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:47:47.167738    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:47:47.167749    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:47:47.186962    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:47:47.186973    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:47:47.204387    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:47:47.204398    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:47:47.216635    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:47:47.216647    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:47:47.255238    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:47:47.255249    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:47:49.761405    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:54.427900    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:54.427942    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:54.763606    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:54.763800    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:47:54.790681    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:47:54.790766    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:47:54.803855    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:47:54.803933    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:47:54.814603    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:47:54.814674    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:47:54.824834    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:47:54.824909    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:47:54.836218    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:47:54.836290    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:47:54.846550    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:47:54.846620    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:47:54.856199    4568 logs.go:276] 0 containers: []
	W0729 16:47:54.856209    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:47:54.856281    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:47:54.874845    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:47:54.874860    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:47:54.874867    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:47:54.886551    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:47:54.886565    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:47:54.901595    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:47:54.901605    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:47:54.913078    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:47:54.913092    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:47:54.938810    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:47:54.938821    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:47:54.978318    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:47:54.978329    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:47:55.014914    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:47:55.014927    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:47:55.028522    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:47:55.028536    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:47:55.047472    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:47:55.047488    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:47:55.059008    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:47:55.059017    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:47:55.070612    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:47:55.070623    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:47:55.105397    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:47:55.105409    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:47:55.120098    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:47:55.120108    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:47:55.141016    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:47:55.141028    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:47:55.159252    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:47:55.159262    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:47:55.171999    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:47:55.172012    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:47:59.429921    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:59.429942    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:57.677910    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:04.430214    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:04.430245    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:02.680226    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:02.680409    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:02.706596    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:48:02.706703    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:02.722054    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:48:02.722134    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:02.734554    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:48:02.734616    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:02.746078    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:48:02.746153    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:02.756834    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:48:02.756901    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:02.767837    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:48:02.767907    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:02.777904    4568 logs.go:276] 0 containers: []
	W0729 16:48:02.777916    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:02.777969    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:02.788648    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:48:02.788666    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:48:02.788671    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:48:02.807589    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:48:02.807603    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:48:02.825333    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:48:02.825348    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:48:02.838045    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:02.838056    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:02.877613    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:48:02.877621    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:48:02.891550    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:48:02.891561    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:48:02.929040    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:48:02.929052    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:48:02.949399    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:02.949411    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:02.954063    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:48:02.954070    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:48:02.968358    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:48:02.968368    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:48:02.979837    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:48:02.979848    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:48:02.991440    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:02.991450    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:03.026805    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:48:03.026816    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:48:03.042208    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:48:03.042222    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:03.053705    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:48:03.053718    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:48:03.068365    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:03.068379    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:05.594758    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:09.432398    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:09.432527    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:09.443338    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:48:09.443407    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:09.454023    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:48:09.454098    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:09.464713    4389 logs.go:276] 2 containers: [4b491e173233 af28ca5a05f8]
	I0729 16:48:09.464783    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:09.474919    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:48:09.474988    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:09.485239    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:48:09.485314    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:09.495642    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:48:09.495721    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:09.506177    4389 logs.go:276] 0 containers: []
	W0729 16:48:09.506188    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:09.506251    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:09.516551    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:48:09.516566    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:09.516572    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:09.541573    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:48:09.541586    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:09.553411    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:09.553424    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:09.558262    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:48:09.558271    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:48:09.569908    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:48:09.569922    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:48:09.583989    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:48:09.583999    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:48:09.598068    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:48:09.598079    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:48:09.613321    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:48:09.613332    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:48:09.625330    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:48:09.625340    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:48:09.643359    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:48:09.643369    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:48:09.654749    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:09.654760    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:09.689169    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:09.689181    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:09.723751    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:48:09.723762    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:48:10.596927    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:10.597025    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:10.608240    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:48:10.608320    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:10.619494    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:48:10.619587    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:10.631475    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:48:10.631545    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:10.642518    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:48:10.642586    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:10.652986    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:48:10.653053    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:10.663374    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:48:10.663446    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:10.673489    4568 logs.go:276] 0 containers: []
	W0729 16:48:10.673501    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:10.673554    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:10.688706    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:48:10.688723    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:48:10.688729    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:10.702902    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:10.702920    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:10.737727    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:48:10.737737    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:48:10.776559    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:48:10.776572    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:48:10.788912    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:48:10.788927    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:48:12.240271    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:10.808563    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:48:10.808574    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:48:10.820914    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:10.820925    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:10.845169    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:10.845179    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:10.886536    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:48:10.886552    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:48:10.898217    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:48:10.898231    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:48:10.916791    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:10.916805    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:10.921025    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:48:10.921031    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:48:10.934607    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:48:10.934620    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:48:10.946900    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:48:10.946914    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:48:10.960134    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:48:10.960144    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:48:10.974467    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:48:10.974477    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:48:13.487722    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:17.242604    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:17.242824    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:17.258565    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:48:17.258655    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:17.271060    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:48:17.271132    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:17.281972    4389 logs.go:276] 2 containers: [4b491e173233 af28ca5a05f8]
	I0729 16:48:17.282051    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:17.293593    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:48:17.293659    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:17.305321    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:48:17.305390    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:17.315751    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:48:17.315816    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:17.325965    4389 logs.go:276] 0 containers: []
	W0729 16:48:17.325975    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:17.326029    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:17.336699    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:48:17.336713    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:48:17.336718    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:48:17.348183    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:48:17.348194    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:48:17.366871    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:48:17.366881    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:48:17.379230    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:48:17.379241    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:48:17.396831    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:48:17.396844    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:48:17.408115    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:17.408126    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:17.412480    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:48:17.412486    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:48:17.426186    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:48:17.426196    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:48:17.440660    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:48:17.440671    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:48:17.452057    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:17.452067    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:17.476441    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:48:17.476449    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:17.488053    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:17.488067    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:17.520917    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:17.520924    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:18.489218    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:18.489469    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:18.515748    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:48:18.515845    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:18.530971    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:48:18.531061    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:18.543504    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:48:18.543573    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:18.554327    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:48:18.554409    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:18.564992    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:48:18.565064    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:18.575744    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:48:18.575810    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:18.585476    4568 logs.go:276] 0 containers: []
	W0729 16:48:18.585487    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:18.585547    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:18.598406    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:48:18.598431    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:48:18.598437    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:18.611252    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:18.611267    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:18.650275    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:18.650287    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:18.654457    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:48:18.654466    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:48:18.668262    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:48:18.668275    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:48:18.679263    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:48:18.679275    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:48:18.691146    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:18.691161    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:18.718451    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:18.718464    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:18.754674    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:48:18.754689    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:48:18.792360    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:48:18.792370    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:48:18.806624    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:48:18.806632    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:48:18.824118    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:48:18.824133    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:48:18.842328    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:48:18.842343    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:48:18.856623    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:48:18.856638    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:48:18.872246    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:48:18.872256    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:48:18.885146    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:48:18.885156    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:48:20.058707    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:21.399965    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:25.060882    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:25.061089    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:25.083878    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:48:25.083966    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:25.096128    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:48:25.096195    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:25.107460    4389 logs.go:276] 2 containers: [4b491e173233 af28ca5a05f8]
	I0729 16:48:25.107537    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:25.118089    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:48:25.118152    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:25.132427    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:48:25.132503    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:25.142855    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:48:25.142923    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:25.153429    4389 logs.go:276] 0 containers: []
	W0729 16:48:25.153441    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:25.153502    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:25.165014    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:48:25.165027    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:25.165032    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:25.199825    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:25.199833    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:25.238928    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:48:25.238940    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:48:25.250980    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:48:25.250992    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:48:25.262710    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:25.262722    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:25.287477    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:48:25.287485    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:25.298765    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:25.298780    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:25.303239    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:48:25.303246    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:48:25.317200    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:48:25.317213    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:48:25.330909    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:48:25.330922    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:48:25.342326    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:48:25.342339    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:48:25.356648    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:48:25.356663    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:48:25.373806    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:48:25.373820    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:48:27.887225    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:26.402342    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:26.402519    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:26.416036    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:48:26.416122    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:26.427516    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:48:26.427584    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:26.438166    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:48:26.438240    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:26.448707    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:48:26.448778    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:26.459143    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:48:26.459211    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:26.469643    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:48:26.469704    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:26.479768    4568 logs.go:276] 0 containers: []
	W0729 16:48:26.479782    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:26.479842    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:26.490319    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:48:26.490339    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:48:26.490345    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:48:26.531731    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:48:26.531741    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:48:26.545584    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:48:26.545595    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:48:26.557059    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:26.557070    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:26.561272    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:26.561279    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:26.598045    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:48:26.598060    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:48:26.612865    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:48:26.612876    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:48:26.627784    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:48:26.627800    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:48:26.646558    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:26.646574    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:26.670202    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:48:26.670211    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:48:26.684068    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:48:26.684078    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:48:26.695759    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:48:26.695772    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:26.709911    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:26.709922    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:26.747132    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:48:26.747142    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:48:26.759369    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:48:26.759379    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:48:26.771063    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:48:26.771074    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:48:29.288333    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:32.888894    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:32.889033    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:32.900078    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:48:32.900155    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:32.910469    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:48:32.910534    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:32.921134    4389 logs.go:276] 2 containers: [4b491e173233 af28ca5a05f8]
	I0729 16:48:32.921207    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:32.931500    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:48:32.931568    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:32.949740    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:48:32.949814    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:32.960521    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:48:32.960590    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:32.971457    4389 logs.go:276] 0 containers: []
	W0729 16:48:32.971467    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:32.971523    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:32.981868    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:48:32.981884    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:32.981890    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:32.986923    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:32.986930    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:33.021808    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:48:33.021830    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:48:33.036519    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:48:33.036530    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:48:33.052567    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:48:33.052578    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:48:33.063901    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:33.063910    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:33.088367    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:33.088375    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:33.123378    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:48:33.123386    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:48:33.142572    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:48:33.142584    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:48:33.157113    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:48:33.157124    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:48:33.169044    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:48:33.169055    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:48:33.186870    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:48:33.186888    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:48:33.198712    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:48:33.198723    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:34.290472    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:34.290619    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:34.307834    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:48:34.307916    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:34.318375    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:48:34.318446    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:34.328562    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:48:34.328635    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:34.339395    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:48:34.339470    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:34.350145    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:48:34.350220    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:34.360916    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:48:34.360986    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:34.371213    4568 logs.go:276] 0 containers: []
	W0729 16:48:34.371227    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:34.371288    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:34.383344    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:48:34.383361    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:48:34.383367    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:48:34.395192    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:48:34.395206    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:48:34.406193    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:48:34.406205    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:48:34.417200    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:34.417214    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:34.421405    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:34.421412    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:34.444211    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:34.444219    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:34.481150    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:48:34.481157    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:48:34.493647    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:48:34.493661    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:48:34.505271    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:48:34.505281    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:48:34.531028    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:48:34.531038    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:34.543272    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:48:34.543286    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:48:34.582128    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:48:34.582138    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:48:34.596027    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:48:34.596045    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:48:34.614495    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:48:34.614506    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:48:34.629080    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:48:34.629095    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:48:34.648083    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:34.648097    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:35.711983    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:37.191332    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:40.714120    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:40.714310    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:40.730318    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:48:40.730413    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:40.747526    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:48:40.747601    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:40.757987    4389 logs.go:276] 2 containers: [4b491e173233 af28ca5a05f8]
	I0729 16:48:40.758060    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:40.768486    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:48:40.768558    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:40.779066    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:48:40.779140    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:40.789625    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:48:40.789699    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:40.801825    4389 logs.go:276] 0 containers: []
	W0729 16:48:40.801836    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:40.801914    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:40.812281    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:48:40.812295    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:48:40.812300    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:48:40.826089    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:48:40.826098    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:48:40.837521    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:48:40.837536    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:48:40.853015    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:48:40.853026    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:48:40.870303    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:48:40.870312    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:40.885816    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:40.885827    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:40.923220    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:48:40.923229    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:48:40.937814    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:48:40.937824    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:48:40.949838    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:48:40.949852    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:48:40.964720    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:48:40.964729    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:48:40.976184    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:40.976195    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:41.001516    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:41.001535    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:41.036245    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:41.036257    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:43.543431    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:42.193624    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:42.193856    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:42.210461    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:48:42.210546    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:42.222690    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:48:42.222765    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:42.233075    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:48:42.233140    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:42.243412    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:48:42.243488    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:42.254498    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:48:42.254571    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:42.265464    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:48:42.265526    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:42.275846    4568 logs.go:276] 0 containers: []
	W0729 16:48:42.275856    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:42.275909    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:42.286248    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:48:42.286268    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:48:42.286273    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:48:42.328450    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:48:42.328461    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:48:42.342876    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:48:42.342890    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:48:42.354202    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:48:42.354215    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:48:42.365709    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:48:42.365720    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:48:42.379917    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:48:42.379927    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:48:42.396825    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:48:42.396835    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:48:42.408764    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:42.408776    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:42.447818    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:42.447829    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:42.483094    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:48:42.483107    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:48:42.498393    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:48:42.498406    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:48:42.509656    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:48:42.509669    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:48:42.524355    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:42.524365    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:42.528750    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:48:42.528758    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:48:42.540447    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:42.540458    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:42.564972    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:48:42.564979    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
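
The cycle above is minikube's log-collection pattern in miniature: for each control-plane component it first resolves container IDs with a docker name filter, then tails each container's log. Below is a minimal local sketch of that same pattern in Go; the component names and docker CLI invocations are copied from the commands logged above, but running them locally instead of over SSH (as ssh_runner.go does) is an assumption made only for illustration.

    // gather_logs.go - minimal sketch of the docker-ps/docker-logs pattern
    // seen in the log above. Assumes a local docker CLI; minikube itself
    // runs these same commands on the guest over SSH (ssh_runner.go).
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println("docker ps failed:", err)
    			continue
    		}
    		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
    		for _, id := range ids {
    			// mirrors: /bin/bash -c "docker logs --tail 400 <id>"
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
    		}
    	}
    }

An empty ID list reproduces the warning seen above for "kindnet": the filter matched nothing, so there is nothing to tail.
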
	I0729 16:48:45.078218    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:48.543911    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:48.544161    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:48.564992    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:48:48.565091    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:48.582434    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:48:48.582502    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:48.594511    4389 logs.go:276] 2 containers: [4b491e173233 af28ca5a05f8]
	I0729 16:48:48.594590    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:48.605949    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:48:48.606012    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:48.617818    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:48:48.617882    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:48.629819    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:48:48.629882    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:48.640592    4389 logs.go:276] 0 containers: []
	W0729 16:48:48.640603    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:48.640654    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:48.651026    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:48:48.651042    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:48:48.651047    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:48:48.665457    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:48:48.665467    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:48:48.679066    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:48:48.679076    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:48:48.690557    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:48:48.690570    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:48:48.702563    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:48:48.702578    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:48:48.724547    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:48:48.724559    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:48:48.737686    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:48.737696    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:48.742387    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:48.742394    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:48.777871    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:48.777882    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:48.801827    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:48:48.801839    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:48:48.816287    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:48:48.816297    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:48.828265    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:48.828275    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:48.862628    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:48:48.862645    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
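
The interleaved api_server.go lines come from two minikube processes (PIDs 4389 and 4568), each polling the apiserver's /healthz endpoint between log-gathering passes. The recurring text "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" is exactly what Go's net/http client reports when its Timeout elapses before response headers arrive. The sketch below reproduces that failure mode; the endpoint, interval, timeout value, and the InsecureSkipVerify shortcut are illustrative assumptions, not minikube's actual code.

    // healthz_poll.go - illustrative sketch of polling an HTTPS /healthz
    // endpoint with a per-request timeout, reproducing the error text
    // seen above when the apiserver never answers.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		// surfaces as "Client.Timeout exceeded while awaiting headers"
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// the apiserver cert is self-signed; skipping verification
    			// is an assumption made only to keep this sketch runnable
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	url := "https://10.0.2.15:8443/healthz"
    	for {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Println("stopped:", err) // e.g. context deadline exceeded
    		} else {
    			fmt.Println("healthz:", resp.Status)
    			resp.Body.Close()
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }
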
	I0729 16:48:50.080517    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:50.080742    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:50.103688    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:48:50.103782    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:50.119573    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:48:50.119651    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:50.130818    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:48:50.130892    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:50.141621    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:48:50.141699    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:50.152443    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:48:50.152511    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:50.163124    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:48:50.163195    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:50.181637    4568 logs.go:276] 0 containers: []
	W0729 16:48:50.181649    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:50.181706    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:50.192102    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:48:50.192120    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:48:50.192125    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:48:50.203575    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:48:50.203587    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:48:50.219784    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:48:50.219795    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:48:50.237968    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:50.237979    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:50.263406    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:50.263415    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:50.303480    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:50.303496    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:50.307976    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:48:50.307984    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:48:50.349048    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:48:50.349059    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:48:50.363600    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:48:50.363612    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:48:50.374838    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:50.374851    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:50.412361    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:48:50.412377    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:48:50.426821    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:48:50.426834    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:48:50.442518    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:48:50.442530    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:50.456859    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:48:50.456875    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:48:50.470609    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:48:50.470620    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:48:50.483986    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:48:50.483997    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:48:51.376858    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:53.000284    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:56.379067    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:56.379229    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:56.393648    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:48:56.393729    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:56.404527    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:48:56.404601    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:56.414948    4389 logs.go:276] 2 containers: [4b491e173233 af28ca5a05f8]
	I0729 16:48:56.415020    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:56.425322    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:48:56.425384    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:56.435616    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:48:56.435693    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:56.445994    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:48:56.446078    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:56.456916    4389 logs.go:276] 0 containers: []
	W0729 16:48:56.456927    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:56.456981    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:56.467505    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:48:56.467519    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:56.467524    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:56.502742    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:48:56.502755    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:48:56.514185    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:48:56.514199    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:48:56.528941    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:48:56.528951    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:48:56.540359    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:48:56.540372    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:48:56.557435    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:48:56.557446    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:56.568958    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:56.568970    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:56.604898    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:56.604910    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:56.609716    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:48:56.609724    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:48:56.624200    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:48:56.624212    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:48:56.638326    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:48:56.638338    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:48:56.650281    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:48:56.650295    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:48:56.662550    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:56.662561    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:59.187637    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:58.002531    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:58.002680    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:58.021314    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:48:58.021391    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:58.032569    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:48:58.032637    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:58.042938    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:48:58.043000    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:58.053777    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:48:58.053855    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:58.064976    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:48:58.065044    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:58.080819    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:48:58.080899    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:58.092215    4568 logs.go:276] 0 containers: []
	W0729 16:48:58.092228    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:58.092294    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:58.103059    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:48:58.103079    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:48:58.103084    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:48:58.114566    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:58.114578    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:58.139125    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:48:58.139132    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:58.150914    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:48:58.150927    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:48:58.166742    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:58.166752    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:58.204423    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:58.204432    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:58.208442    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:58.208448    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:58.243555    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:48:58.243568    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:48:58.257462    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:48:58.257473    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:48:58.272135    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:48:58.272149    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:48:58.283068    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:48:58.283080    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:48:58.298143    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:48:58.298155    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:48:58.337032    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:48:58.337046    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:48:58.350984    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:48:58.350997    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:48:58.362565    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:48:58.362578    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:48:58.380258    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:48:58.380272    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
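
Besides per-container logs, each pass also collects host-level logs: the kubelet and docker/cri-docker journald units (last 400 lines each) and kernel messages filtered to warn severity and above. A short sketch of that host-side collection, with the flags copied verbatim from the commands logged above; running them on the local host rather than inside the guest VM is the illustrative assumption.

    // host_logs.go - sketch of the host-level collection seen each pass:
    // kubelet/docker journals plus filtered kernel messages. Command
    // strings are copied verbatim from the log; local execution (no SSH)
    // is an assumption for illustration.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(name, cmd string) {
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	fmt.Printf("=== %s (err=%v) ===\n%s", name, err, out)
    }

    func main() {
    	run("kubelet", "sudo journalctl -u kubelet -n 400")
    	run("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
    	run("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    }
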
	I0729 16:49:04.189908    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:04.190188    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:04.213225    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:49:04.213350    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:04.229599    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:49:04.229704    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:04.242540    4389 logs.go:276] 2 containers: [4b491e173233 af28ca5a05f8]
	I0729 16:49:04.242615    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:04.255530    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:49:04.255603    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:04.265782    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:49:04.265848    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:04.276273    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:49:04.276348    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:04.288198    4389 logs.go:276] 0 containers: []
	W0729 16:49:04.288212    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:04.288272    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:04.298482    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:49:04.298495    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:49:04.298501    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:49:04.310562    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:49:04.310573    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:49:04.322751    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:49:04.322761    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:49:04.341766    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:49:04.341777    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:49:04.360023    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:04.360034    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:04.395781    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:04.395790    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:04.399973    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:49:04.399983    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:49:04.414465    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:49:04.414476    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:49:04.428316    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:04.428325    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:04.453878    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:49:04.453886    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:04.465567    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:04.465577    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:04.499212    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:49:04.499223    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:49:04.513867    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:49:04.513876    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:49:00.894747    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:07.027768    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:05.895159    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:05.895303    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:05.909202    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:49:05.909270    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:05.919318    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:49:05.919394    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:05.930127    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:49:05.930197    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:05.945988    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:49:05.946068    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:05.956631    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:49:05.956701    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:05.967352    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:49:05.967431    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:05.977828    4568 logs.go:276] 0 containers: []
	W0729 16:49:05.977841    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:05.977897    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:05.988827    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:49:05.988843    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:49:05.988848    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:49:06.001505    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:49:06.001517    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:49:06.019190    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:49:06.019201    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:49:06.031266    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:06.031276    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:06.036012    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:06.036020    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:06.069450    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:49:06.069460    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:49:06.092306    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:49:06.092315    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:49:06.106246    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:49:06.106257    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:49:06.117842    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:49:06.117854    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:49:06.130380    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:06.130391    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:06.159129    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:06.159141    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:06.197388    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:49:06.197399    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:49:06.236325    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:49:06.236337    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:06.249251    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:49:06.249261    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:49:06.263726    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:49:06.263739    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:49:06.285027    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:49:06.285038    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:49:08.798550    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:12.030248    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:12.030435    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:12.050963    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:49:12.051080    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:12.066252    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:49:12.066322    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:12.078830    4389 logs.go:276] 2 containers: [4b491e173233 af28ca5a05f8]
	I0729 16:49:12.078906    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:12.089592    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:49:12.089664    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:12.100863    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:49:12.100931    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:12.111798    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:49:12.111891    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:12.130122    4389 logs.go:276] 0 containers: []
	W0729 16:49:12.130134    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:12.130197    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:12.140701    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:49:12.140716    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:12.140722    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:12.174403    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:49:12.174412    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:49:12.190999    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:49:12.191010    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:49:12.211376    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:49:12.211386    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:49:12.228685    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:49:12.228696    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:49:12.240332    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:49:12.240341    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:49:12.255262    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:49:12.255275    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:49:12.266904    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:12.266913    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:12.291879    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:12.291889    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:12.296725    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:12.296735    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:12.333076    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:49:12.333087    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:49:12.348523    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:49:12.348533    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:49:12.362197    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:49:12.362208    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
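
The "container status" step uses a deliberate fallback chain: `which crictl || echo crictl` resolves crictl's full path if installed (falling back to the bare name), and if the crictl invocation fails entirely, `|| sudo docker ps -a` takes over. A minimal sketch running that exact one-liner through bash -c, as ssh_runner.go does; executing it locally instead of over SSH is the assumption here.

    // container_status.go - sketch of the crictl-with-docker-fallback
    // one-liner from the log, run through bash -c. Local execution
    // (no SSH) is an assumption for illustration.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// crictl if present, otherwise fall back to docker ps -a
    	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Println("container status failed:", err)
    	}
    	fmt.Print(string(out))
    }
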
	I0729 16:49:13.800839    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:13.801060    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:13.814729    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:49:13.814805    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:13.827819    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:49:13.827891    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:13.838069    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:49:13.838143    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:13.848843    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:49:13.848919    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:13.858795    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:49:13.858863    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:13.869409    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:49:13.869483    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:13.879338    4568 logs.go:276] 0 containers: []
	W0729 16:49:13.879352    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:13.879410    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:13.889785    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:49:13.889806    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:49:13.889812    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:49:13.927097    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:49:13.927107    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:49:13.938712    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:49:13.938724    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:49:13.953511    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:49:13.953522    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:49:13.964673    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:13.964684    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:14.000893    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:14.000903    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:14.004649    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:49:14.004658    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:49:14.018103    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:49:14.018113    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:14.029971    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:14.029982    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:14.065953    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:49:14.065968    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:49:14.080332    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:49:14.080350    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:49:14.096223    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:49:14.096234    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:49:14.113971    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:49:14.114011    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:49:14.136834    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:49:14.136845    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:49:14.150495    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:49:14.150510    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:49:14.164701    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:14.164714    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:14.875512    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:16.689206    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:19.877615    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:19.877753    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:19.889359    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:49:19.889438    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:19.899784    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:49:19.899892    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:19.917899    4389 logs.go:276] 2 containers: [4b491e173233 af28ca5a05f8]
	I0729 16:49:19.917962    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:19.928262    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:49:19.928321    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:19.939138    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:49:19.939213    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:19.949274    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:49:19.949338    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:19.959450    4389 logs.go:276] 0 containers: []
	W0729 16:49:19.959458    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:19.959518    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:19.970074    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:49:19.970086    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:49:19.970091    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:49:19.981336    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:19.981345    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:20.004182    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:20.004194    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:20.008930    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:20.008939    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:20.042997    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:49:20.043011    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:49:20.057136    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:49:20.057149    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:49:20.068986    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:49:20.069000    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:49:20.080360    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:49:20.080369    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:49:20.097814    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:49:20.097824    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:20.109410    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:20.109419    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:20.144759    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:49:20.144771    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:49:20.159432    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:49:20.159441    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:49:20.174251    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:49:20.174264    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:49:22.687705    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:21.691526    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:21.691872    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:21.718480    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:49:21.718627    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:21.735942    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:49:21.736022    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:21.755273    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:49:21.755349    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:21.766031    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:49:21.766105    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:21.776786    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:49:21.776859    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:21.788230    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:49:21.788304    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:21.798999    4568 logs.go:276] 0 containers: []
	W0729 16:49:21.799015    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:21.799073    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:21.813134    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:49:21.813152    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:49:21.813157    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:49:21.827124    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:49:21.827135    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:49:21.844507    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:21.844517    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:21.868469    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:49:21.868491    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:49:21.881779    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:49:21.881791    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:49:21.897287    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:21.897302    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:21.931661    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:49:21.931671    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:49:21.969300    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:49:21.969311    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:49:21.984020    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:49:21.984034    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:49:21.995653    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:49:21.995663    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:49:22.007385    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:49:22.007395    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:22.019519    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:22.019529    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:22.056280    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:22.056292    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:22.060118    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:49:22.060127    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:49:22.071658    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:49:22.071668    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:49:22.085434    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:49:22.085445    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:49:24.602104    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:27.689784    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:27.690145    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:27.714140    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:49:27.714234    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:27.730598    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:49:27.730678    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:27.743810    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:49:27.743880    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:27.758837    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:49:27.758910    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:27.770394    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:49:27.770453    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:27.780835    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:49:27.780900    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:27.790865    4389 logs.go:276] 0 containers: []
	W0729 16:49:27.790877    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:27.790936    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:27.801580    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:49:27.801596    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:49:27.801602    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:27.815508    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:49:27.815521    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:49:27.828523    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:49:27.828534    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:49:27.845944    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:49:27.845953    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:49:27.861866    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:49:27.861876    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:49:27.876508    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:49:27.876518    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:49:27.888114    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:27.888123    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:27.913433    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:49:27.913440    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:49:27.924559    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:49:27.924570    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:49:27.938587    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:27.938597    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:27.974546    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:49:27.974556    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:49:27.993919    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:49:27.993929    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:49:28.005643    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:49:28.005653    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:49:28.017320    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:28.017330    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:28.050905    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:28.050914    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:29.604355    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:29.604477    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:29.624494    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:49:29.624568    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:29.643806    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:49:29.643880    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:29.654230    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:49:29.654304    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:29.665069    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:49:29.665145    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:29.675593    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:49:29.675660    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:29.686512    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:49:29.686579    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:29.700357    4568 logs.go:276] 0 containers: []
	W0729 16:49:29.700374    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:29.700439    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:29.711373    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:49:29.711390    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:29.711396    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:29.745289    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:49:29.745303    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:49:29.760069    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:49:29.760084    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:49:29.777342    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:29.777353    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:29.801137    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:29.801144    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:29.840234    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:49:29.840247    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:49:29.851864    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:49:29.851879    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:49:29.863503    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:49:29.863514    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:29.875395    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:29.875406    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:29.879584    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:49:29.879593    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:49:29.917984    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:49:29.917994    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:49:29.932799    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:49:29.932812    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:49:29.945307    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:49:29.945319    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:49:29.959291    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:49:29.959304    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:49:29.973936    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:49:29.973948    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:49:29.985790    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:49:29.985802    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
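	Each pass is bracketed by a probe of https://10.0.2.15:8443/healthz that gives up after roughly five seconds (Client.Timeout exceeded while awaiting headers); the gap between the "Checking" and "stopped" timestamps above matches that budget. A hand probe of the same endpoint, sketched with curl (the flags are an assumption; the test itself goes through minikube's Go client in api_server.go):

	    # Probe the apiserver health endpoint the way the log does.
	    # -k skips TLS verification; --max-time 5 mirrors the ~5s client timeout.
	    curl -k --max-time 5 https://10.0.2.15:8443/healthz || echo "apiserver not healthy yet"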
	I0729 16:49:30.557938    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:32.499062    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:35.560091    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:35.560360    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:35.583704    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:49:35.583820    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:35.599828    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:49:35.599902    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:35.614976    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:49:35.615058    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:35.626455    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:49:35.626530    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:35.636792    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:49:35.636864    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:35.647297    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:49:35.647364    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:35.657369    4389 logs.go:276] 0 containers: []
	W0729 16:49:35.657380    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:35.657438    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:35.666998    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:49:35.667013    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:49:35.667019    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:49:35.678477    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:49:35.678487    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:49:35.695904    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:49:35.695916    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:49:35.707833    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:49:35.707844    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:49:35.719012    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:49:35.719024    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:49:35.732698    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:49:35.732707    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:49:35.744542    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:35.744555    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:35.768856    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:49:35.768866    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:49:35.783035    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:35.783045    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:35.788171    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:35.788178    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:35.828622    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:49:35.828635    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:49:35.842145    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:49:35.842156    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:49:35.855211    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:49:35.855223    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:49:35.869696    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:49:35.869706    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:35.881625    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:35.881634    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
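	The "container status" step is a shell fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a prefers crictl when it is on PATH and otherwise falls through to plain docker ps -a (the echo crictl substitution guarantees the first command fails fast when crictl is missing). Unrolled, the one-liner behaves roughly like:

	    # Unrolled form of the container-status fallback from the log.
	    if command -v crictl >/dev/null 2>&1; then
	      sudo crictl ps -a      # CRI-aware listing when crictl exists
	    else
	      sudo docker ps -a      # fallback used on this Docker-runtime node
	    fi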
	I0729 16:49:38.417433    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:37.501267    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:37.501450    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:37.523612    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:49:37.523693    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:37.535621    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:49:37.535684    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:37.546623    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:49:37.546705    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:37.557273    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:49:37.557345    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:37.567509    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:49:37.567569    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:37.579438    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:49:37.579504    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:37.589675    4568 logs.go:276] 0 containers: []
	W0729 16:49:37.589693    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:37.589749    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:37.600139    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:49:37.600159    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:37.600165    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:37.604486    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:49:37.604495    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:49:37.616416    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:49:37.616427    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:49:37.627878    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:49:37.627891    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:49:37.645499    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:49:37.645509    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:49:37.657716    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:37.657729    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:37.681940    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:49:37.681948    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:49:37.706627    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:49:37.706638    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:49:37.721676    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:49:37.721688    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:37.733485    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:37.733497    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:37.773348    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:49:37.773357    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:49:37.787225    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:49:37.787237    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:49:37.825311    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:49:37.825324    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:49:37.839888    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:49:37.839901    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:49:37.851156    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:37.851169    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:37.886575    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:49:37.886587    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:49:40.400990    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:43.419540    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:43.419681    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:43.431380    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:49:43.431453    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:43.442273    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:49:43.442346    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:43.452755    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:49:43.452827    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:43.463333    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:49:43.463413    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:43.473595    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:49:43.473664    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:43.483672    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:49:43.483742    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:43.493898    4389 logs.go:276] 0 containers: []
	W0729 16:49:43.493909    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:43.493969    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:43.504179    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:49:43.504197    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:49:43.504202    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:49:43.516293    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:43.516303    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:43.541380    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:43.541388    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:43.577150    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:49:43.577160    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:49:43.593325    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:49:43.593334    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:49:43.607423    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:43.607439    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:43.612028    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:49:43.612035    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:49:43.626088    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:49:43.626099    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:49:43.637856    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:49:43.637867    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:49:43.649713    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:49:43.649724    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:49:43.661278    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:49:43.661289    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:49:43.676615    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:49:43.676626    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:49:43.700257    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:49:43.700268    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:43.712447    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:43.712461    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:43.746029    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:49:43.746042    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
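	The "Docker" section is read from journald rather than from a container: journalctl -u docker -u cri-docker -n 400 merges the last 400 entries of the docker and cri-docker units. The same logs can be pulled by hand (the --no-pager and -f variants are additions for interactive use, not part of the test):

	    # Runtime logs as gathered for the "Docker" section.
	    sudo journalctl -u docker -u cri-docker -n 400 --no-pager
	    # Follow one unit live while reproducing a failure:
	    sudo journalctl -u cri-docker -f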
	I0729 16:49:45.403306    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:45.403446    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:45.423667    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:49:45.423750    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:45.434627    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:49:45.434697    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:45.445549    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:49:45.445620    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:45.456005    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:49:45.456070    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:45.466449    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:49:45.466521    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:45.477178    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:49:45.477245    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:45.487502    4568 logs.go:276] 0 containers: []
	W0729 16:49:45.487514    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:45.487572    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:45.498208    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:49:45.498225    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:49:45.498231    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:49:45.512023    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:49:45.512033    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:49:45.552172    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:49:45.552185    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:49:45.563813    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:49:45.563828    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:49:45.578324    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:49:45.578336    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:49:45.591072    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:49:45.591083    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:49:45.602119    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:49:45.602132    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:49:45.617271    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:49:45.617284    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:49:45.629216    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:49:45.629228    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:49:45.647122    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:49:45.647133    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:49:45.658382    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:45.658393    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:45.698074    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:45.698085    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:45.702508    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:45.702515    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:45.735984    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:49:45.735995    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:49:45.750456    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:45.750471    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:45.773655    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:49:45.773670    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:46.263064    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:48.287667    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:51.265272    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:51.265430    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:51.279185    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:49:51.279266    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:51.295061    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:49:51.295131    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:51.305897    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:49:51.305972    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:51.319998    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:49:51.320073    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:51.331209    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:49:51.331275    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:51.341859    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:49:51.341927    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:51.353140    4389 logs.go:276] 0 containers: []
	W0729 16:49:51.353152    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:51.353210    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:51.364220    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:49:51.364238    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:51.364243    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:51.399400    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:49:51.399411    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:49:51.413488    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:49:51.413498    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:49:51.428632    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:49:51.428644    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:49:51.440827    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:49:51.440838    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:49:51.452179    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:49:51.452191    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:49:51.464045    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:49:51.464056    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:49:51.475580    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:49:51.475594    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:49:51.491560    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:51.491570    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:51.524711    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:51.524718    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:51.528834    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:49:51.528843    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:49:51.540506    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:49:51.540517    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:49:51.558144    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:49:51.558154    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:49:51.570402    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:51.570412    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:51.595839    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:49:51.595846    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:54.109770    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:53.289361    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:53.289548    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:53.305635    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:49:53.305714    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:53.317797    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:49:53.317860    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:53.328877    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:49:53.328948    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:53.339387    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:49:53.339464    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:53.350420    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:49:53.350492    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:53.365153    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:49:53.365232    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:53.384195    4568 logs.go:276] 0 containers: []
	W0729 16:49:53.384210    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:53.384274    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:53.409779    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:49:53.409799    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:53.409805    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:53.444664    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:49:53.444679    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:49:53.463611    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:49:53.463623    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:49:53.477854    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:49:53.477864    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:49:53.489803    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:49:53.489815    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:53.501588    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:53.501598    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:53.525318    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:49:53.525325    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:49:53.563182    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:49:53.563195    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:49:53.577798    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:49:53.577808    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:49:53.595108    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:49:53.595120    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:49:53.607428    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:49:53.607442    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:49:53.619792    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:53.619806    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:53.624116    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:53.624123    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:53.663064    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:49:53.663071    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:49:53.677468    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:49:53.677480    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:49:53.689548    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:49:53.689562    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
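	The dmesg step keeps only higher-severity kernel messages: per util-linux dmesg, -H selects human-readable output, -L=never disables coloring, -P skips the pager, and --level warn,err,crit,alert,emerg filters by severity before tail -n 400 caps the volume. A looser variant for chasing guest-side issues on this QEMU VM (the extra levels are an assumption about what is worth reading, not part of the test):

	    # Severity-filtered kernel log, as in the "dmesg" step above.
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    # Widen the filter when warnings alone do not explain a hang:
	    sudo dmesg -H --level info,notice,warn,err | tail -n 100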
	I0729 16:49:59.112043    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:59.112234    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:59.129740    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:49:59.129830    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:59.146396    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:49:59.146464    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:59.157057    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:49:59.157127    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:59.168657    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:49:59.168733    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:59.179248    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:49:59.179316    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:59.189514    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:49:59.189581    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:59.200845    4389 logs.go:276] 0 containers: []
	W0729 16:49:59.200857    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:59.200918    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:59.211005    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:49:59.211024    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:59.211029    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:59.215645    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:49:59.215654    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:49:59.229549    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:49:59.229562    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:49:59.241330    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:49:59.241340    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:59.253380    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:49:59.253391    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:49:59.272252    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:59.272263    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:59.308733    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:59.308752    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:59.343404    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:49:59.343415    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:49:59.358258    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:49:59.358268    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:49:59.378882    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:49:59.378892    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:49:59.391074    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:59.391088    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:59.415957    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:49:59.415964    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:49:59.427406    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:49:59.427419    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:49:59.438932    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:49:59.438944    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:49:59.452795    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:49:59.452805    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:49:56.205062    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:01.966221    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:01.207390    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:01.207551    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:50:01.222782    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:50:01.222865    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:50:01.234631    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:50:01.234700    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:50:01.245503    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:50:01.245574    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:50:01.255885    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:50:01.255961    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:50:01.266645    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:50:01.266717    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:50:01.277140    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:50:01.277203    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:50:01.287897    4568 logs.go:276] 0 containers: []
	W0729 16:50:01.287908    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:50:01.287961    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:50:01.298374    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:50:01.298396    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:50:01.298402    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:50:01.337041    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:50:01.337054    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:50:01.348412    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:50:01.348425    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:50:01.360704    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:50:01.360717    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:50:01.375948    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:50:01.375960    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:50:01.398932    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:50:01.398955    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:50:01.412697    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:50:01.412706    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:50:01.427266    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:50:01.427276    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:50:01.464471    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:50:01.464483    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:50:01.468838    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:50:01.468847    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:50:01.482552    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:50:01.482563    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:50:01.494762    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:50:01.494775    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:50:01.529079    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:50:01.529094    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:50:01.541313    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:50:01.541324    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:50:01.556117    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:50:01.556129    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:50:01.573075    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:50:01.573086    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
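	The "describe nodes" step runs the kubectl binary that minikube staged on the node, /var/lib/minikube/binaries/v1.24.1/kubectl, against the node-local kubeconfig, so it does not depend on the host's kubectl or kubecontext. A sketch of the same call issued from the host, assuming the default profile (a named profile would need -p) and that minikube ssh is available:

	    # Invoke the staged kubectl over minikube ssh.
	    minikube ssh "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"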
	I0729 16:50:04.087850    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:06.966583    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:06.966746    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:50:06.984093    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:50:06.984187    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:50:06.998600    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:50:06.998667    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:50:07.009633    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:50:07.009705    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:50:07.019874    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:50:07.019940    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:50:07.033136    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:50:07.033211    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:50:07.044397    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:50:07.044468    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:50:07.055280    4389 logs.go:276] 0 containers: []
	W0729 16:50:07.055291    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:50:07.055349    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:50:07.065577    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:50:07.065595    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:50:07.065600    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:50:07.077373    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:50:07.077386    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:50:07.089836    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:50:07.089850    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:50:07.108973    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:50:07.108985    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:50:07.150860    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:50:07.150872    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:50:07.165237    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:50:07.165247    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:50:07.181013    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:50:07.181026    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:50:07.205295    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:50:07.205303    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:50:07.216948    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:50:07.216959    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:50:07.238788    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:50:07.238802    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:50:07.250917    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:50:07.250931    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:50:07.285894    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:50:07.285903    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:50:07.290659    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:50:07.290665    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:50:07.302713    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:50:07.302723    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:50:07.317116    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:50:07.317126    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:50:09.090240    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:09.090401    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:50:09.104204    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:50:09.104290    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:50:09.116859    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:50:09.116931    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:50:09.127702    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:50:09.127799    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:50:09.139198    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:50:09.139268    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:50:09.154501    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:50:09.154573    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:50:09.166160    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:50:09.166230    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:50:09.178602    4568 logs.go:276] 0 containers: []
	W0729 16:50:09.178612    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:50:09.178667    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:50:09.189460    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:50:09.189477    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:50:09.189483    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:50:09.211718    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:50:09.211734    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:50:09.250916    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:50:09.250931    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:50:09.267344    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:50:09.267354    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:50:09.278719    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:50:09.278731    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:50:09.293296    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:50:09.293306    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:50:09.333039    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:50:09.333054    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:50:09.337655    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:50:09.337662    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:50:09.373008    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:50:09.373022    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:50:09.384817    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:50:09.384829    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:50:09.397119    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:50:09.397130    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:50:09.408811    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:50:09.408822    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:50:09.420715    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:50:09.420728    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:50:09.435079    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:50:09.435089    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:50:09.449110    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:50:09.449121    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:50:09.460582    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:50:09.460592    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:50:09.835636    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:11.980479    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:14.837821    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:14.837987    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:50:14.849920    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:50:14.849997    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:50:14.860625    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:50:14.860690    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:50:14.871398    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:50:14.871471    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:50:14.881694    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:50:14.881764    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:50:14.892534    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:50:14.892596    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:50:14.903307    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:50:14.903379    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:50:14.912944    4389 logs.go:276] 0 containers: []
	W0729 16:50:14.912955    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:50:14.913013    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:50:14.923773    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:50:14.923790    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:50:14.923795    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:50:14.937795    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:50:14.937807    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
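The "container status" command above is a two-level fallback chain, spelled out with comments:

    sudo `which crictl || echo crictl` ps -a   # prefer crictl when it is installed;
    # if `which crictl` finds nothing, the literal placeholder "crictl" fails to
    # execute, so the outer || falls through to the Docker equivalent:
    sudo docker ps -a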
	I0729 16:50:14.949336    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:50:14.949350    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:50:14.984604    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:50:14.984615    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:50:14.996510    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:50:14.996523    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:50:15.019951    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:50:15.019957    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:50:15.055586    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:50:15.055604    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:50:15.067590    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:50:15.067601    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:50:15.078758    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:50:15.078771    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:50:15.094042    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:50:15.094053    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:50:15.111428    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:50:15.111439    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:50:15.124897    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:50:15.124911    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
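The dmesg invocation above, unpacked (util-linux dmesg; flag glosses below are from its man page, not from this log):

    sudo dmesg -P -H -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # -H / --human     human-readable timestamps (pages by default)
    # -P / --nopager   suppress the pager -H would otherwise start
    # -L=never         no color escape codes (output is captured, not a TTY)
    # --level ...      keep only warning-severity messages and worse
    # tail -n 400      cap the amount of captured output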
	I0729 16:50:15.129640    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:50:15.129646    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:50:15.143653    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:50:15.143664    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:50:15.155622    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:50:15.155635    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:50:17.668959    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:16.982795    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:16.982931    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:50:17.000535    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:50:17.000616    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:50:17.014541    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:50:17.014614    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:50:17.025665    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:50:17.025726    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:50:17.036022    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:50:17.036091    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:50:17.046172    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:50:17.046231    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:50:17.057085    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:50:17.057152    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:50:17.069404    4568 logs.go:276] 0 containers: []
	W0729 16:50:17.069417    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:50:17.069472    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:50:17.080076    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:50:17.080101    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:50:17.080107    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:50:17.099141    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:50:17.099153    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:50:17.112909    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:50:17.112919    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:50:17.130933    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:50:17.130946    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:50:17.143230    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:50:17.143246    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:50:17.147202    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:50:17.147208    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:50:17.158244    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:50:17.158255    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:50:17.172543    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:50:17.172556    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:50:17.184937    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:50:17.184947    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:50:17.223864    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:50:17.223884    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:50:17.235881    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:50:17.235894    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:50:17.250842    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:50:17.250852    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:50:17.262422    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:50:17.262433    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:50:17.284201    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:50:17.284208    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:50:17.322232    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:50:17.322243    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:50:17.339111    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:50:17.339126    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:50:19.877432    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:22.671301    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:22.671557    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:50:22.697766    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:50:22.697861    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:50:22.715913    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:50:22.715991    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:50:22.729372    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:50:22.729434    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:50:22.740682    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:50:22.740747    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:50:22.751919    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:50:22.751987    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:50:22.762868    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:50:22.762928    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:50:22.772955    4389 logs.go:276] 0 containers: []
	W0729 16:50:22.772968    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:50:22.773020    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:50:22.783719    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:50:22.783736    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:50:22.783740    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:50:22.795313    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:50:22.795329    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:50:22.806846    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:50:22.806859    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:50:22.821727    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:50:22.821740    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:50:22.839489    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:50:22.839497    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:50:22.853940    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:50:22.853956    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:50:22.866706    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:50:22.866717    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:50:22.879691    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:50:22.879705    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:50:22.896965    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:50:22.896974    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:50:22.908967    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:50:22.908981    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:50:22.934043    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:50:22.934049    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:50:22.969539    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:50:22.969545    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:50:22.974310    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:50:22.974317    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:50:23.008238    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:50:23.008251    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:50:23.023006    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:50:23.023020    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:50:24.879616    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:24.879747    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:50:24.891290    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:50:24.891365    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:50:24.902535    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:50:24.902612    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:50:24.913145    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:50:24.913210    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:50:24.923471    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:50:24.923547    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:50:24.934143    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:50:24.934218    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:50:24.945244    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:50:24.945315    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:50:24.955777    4568 logs.go:276] 0 containers: []
	W0729 16:50:24.955787    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:50:24.955841    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:50:24.965763    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:50:24.965782    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:50:24.965788    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:50:24.977615    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:50:24.977625    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:50:24.995796    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:50:24.995807    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:50:25.007656    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:50:25.007669    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:50:25.030961    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:50:25.030968    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:50:25.067874    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:50:25.067882    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:50:25.081784    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:50:25.081793    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:50:25.096458    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:50:25.096469    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:50:25.115800    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:50:25.115814    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:50:25.129454    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:50:25.129467    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:50:25.145689    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:50:25.145700    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:50:25.157333    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:50:25.157343    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:50:25.161252    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:50:25.161260    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:50:25.196822    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:50:25.196834    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:50:25.240013    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:50:25.240026    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:50:25.254892    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:50:25.254904    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:50:25.538541    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:27.766572    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:32.768788    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:32.768865    4568 kubeadm.go:597] duration metric: took 4m3.464794292s to restartPrimaryControlPlane
	W0729 16:50:32.768936    4568 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 16:50:32.768968    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 16:50:33.775832    4568 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.006867375s)
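After roughly four minutes of failed healthz probes, minikube gives up restarting the existing control plane and runs kubeadm reset against the cri-dockerd socket. Reset removes the generated kubeconfigs and static pod manifests under /etc/kubernetes (hence the ls failures just below), while minikube's certificates under /var/lib/minikube/certs survive, which is why the subsequent kubeadm init reports "Using existing ca certificate authority". A hedged sketch of what the guest should look like right after the reset:

    sudo ls -la /etc/kubernetes            # *.conf files and manifests/ removed by reset
    sudo ls -la /var/lib/minikube/certs    # CA and component certs persist for re-init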
	I0729 16:50:33.775911    4568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 16:50:33.781043    4568 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 16:50:33.783938    4568 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 16:50:33.786640    4568 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 16:50:33.786647    4568 kubeadm.go:157] found existing configuration files:
	
	I0729 16:50:33.786667    4568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/admin.conf
	I0729 16:50:33.788940    4568 kubeadm.go:163] "https://control-plane.minikube.internal:50508" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 16:50:33.788966    4568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 16:50:33.791956    4568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/kubelet.conf
	I0729 16:50:33.795027    4568 kubeadm.go:163] "https://control-plane.minikube.internal:50508" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 16:50:33.795049    4568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 16:50:33.797669    4568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/controller-manager.conf
	I0729 16:50:33.800190    4568 kubeadm.go:163] "https://control-plane.minikube.internal:50508" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 16:50:33.800209    4568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 16:50:33.803250    4568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/scheduler.conf
	I0729 16:50:33.805979    4568 kubeadm.go:163] "https://control-plane.minikube.internal:50508" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 16:50:33.806001    4568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
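The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig is grepped for the expected control-plane endpoint, and any file that does not match (here, because reset already deleted them all) is removed so kubeadm init can rewrite it. A hedged bash sketch of the equivalent pass:

    endpoint="https://control-plane.minikube.internal:50508"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # keep a config only if it already points at the expected endpoint
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done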
	I0729 16:50:33.808513    4568 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 16:50:33.826439    4568 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 16:50:33.826473    4568 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 16:50:33.875306    4568 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 16:50:33.875364    4568 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 16:50:33.875425    4568 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 16:50:33.923819    4568 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 16:50:33.928027    4568 out.go:204]   - Generating certificates and keys ...
	I0729 16:50:33.928060    4568 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 16:50:33.928098    4568 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 16:50:33.928147    4568 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 16:50:33.928177    4568 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 16:50:33.928213    4568 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 16:50:33.928243    4568 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 16:50:33.928272    4568 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 16:50:33.928318    4568 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 16:50:33.928359    4568 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 16:50:33.928394    4568 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 16:50:33.928416    4568 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 16:50:33.928447    4568 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 16:50:34.015302    4568 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 16:50:34.254757    4568 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 16:50:34.433829    4568 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 16:50:34.534558    4568 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 16:50:34.563186    4568 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 16:50:34.563594    4568 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 16:50:34.563685    4568 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 16:50:34.653558    4568 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 16:50:30.540153    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:30.540322    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:50:30.555662    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:50:30.555751    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:50:30.567645    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:50:30.567716    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:50:30.581176    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:50:30.581257    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:50:30.593667    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:50:30.593738    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:50:30.605413    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:50:30.605483    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:50:30.619775    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:50:30.619846    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:50:30.630120    4389 logs.go:276] 0 containers: []
	W0729 16:50:30.630132    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:50:30.630194    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:50:30.640620    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:50:30.640638    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:50:30.640645    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:50:30.676675    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:50:30.676686    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:50:30.690948    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:50:30.690960    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:50:30.702659    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:50:30.702671    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:50:30.718335    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:50:30.718345    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:50:30.732102    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:50:30.732114    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:50:30.767319    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:50:30.767328    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:50:30.771826    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:50:30.771832    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:50:30.783428    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:50:30.783439    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:50:30.795212    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:50:30.795223    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:50:30.806217    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:50:30.806229    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:50:30.817786    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:50:30.817798    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:50:30.843182    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:50:30.843192    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:50:30.857795    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:50:30.857806    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:50:30.875734    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:50:30.875745    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:50:33.389770    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:34.657839    4568 out.go:204]   - Booting up control plane ...
	I0729 16:50:34.657886    4568 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 16:50:34.657929    4568 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 16:50:34.657961    4568 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 16:50:34.658010    4568 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 16:50:34.658097    4568 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
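At this stage kubeadm has written the control-plane components as static pod manifests; the kubelet watches the manifest folder and starts them directly, with no scheduler involved, which is what the 4m0s wait above is for. A hedged sketch of watching that boot from inside the guest (paths taken from the [control-plane] lines above):

    ls /etc/kubernetes/manifests      # etcd.yaml, kube-apiserver.yaml, kube-scheduler.yaml, ...
    sudo journalctl -u kubelet -f     # kubelet picks the manifests up and starts the pods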
	I0729 16:50:38.391215    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:38.391712    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:50:38.433894    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:50:38.434040    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:50:38.455220    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:50:38.455327    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:50:38.471032    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:50:38.471116    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:50:38.483289    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:50:38.483360    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:50:38.494390    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:50:38.494466    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:50:38.505251    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:50:38.505326    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:50:38.516390    4389 logs.go:276] 0 containers: []
	W0729 16:50:38.516402    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:50:38.516459    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:50:38.527432    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:50:38.527452    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:50:38.527457    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:50:38.539465    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:50:38.539478    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:50:38.551055    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:50:38.551067    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:50:38.566205    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:50:38.566218    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:50:38.581451    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:50:38.581463    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:50:38.596358    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:50:38.596369    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:50:38.615606    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:50:38.615616    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:50:38.627516    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:50:38.627531    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:50:38.640215    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:50:38.640227    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:50:38.653163    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:50:38.653177    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:50:38.678226    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:50:38.678241    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:50:38.714536    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:50:38.714551    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:50:38.719206    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:50:38.719219    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:50:38.757424    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:50:38.757437    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:50:38.770123    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:50:38.770135    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:50:39.156729    4568 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502143 seconds
	I0729 16:50:39.156810    4568 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 16:50:39.161095    4568 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 16:50:39.688166    4568 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 16:50:39.688654    4568 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-480000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 16:50:40.193613    4568 kubeadm.go:310] [bootstrap-token] Using token: 4u8amr.zyu6m2bhslxi0hbj
	I0729 16:50:40.199547    4568 out.go:204]   - Configuring RBAC rules ...
	I0729 16:50:40.199609    4568 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 16:50:40.199657    4568 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 16:50:40.201851    4568 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 16:50:40.203178    4568 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 16:50:40.204304    4568 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 16:50:40.205171    4568 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 16:50:40.208541    4568 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 16:50:40.353800    4568 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 16:50:40.597726    4568 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 16:50:40.598141    4568 kubeadm.go:310] 
	I0729 16:50:40.598182    4568 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 16:50:40.598187    4568 kubeadm.go:310] 
	I0729 16:50:40.598308    4568 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 16:50:40.598313    4568 kubeadm.go:310] 
	I0729 16:50:40.598348    4568 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 16:50:40.598402    4568 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 16:50:40.598451    4568 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 16:50:40.598459    4568 kubeadm.go:310] 
	I0729 16:50:40.598499    4568 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 16:50:40.598508    4568 kubeadm.go:310] 
	I0729 16:50:40.598542    4568 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 16:50:40.598547    4568 kubeadm.go:310] 
	I0729 16:50:40.598598    4568 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 16:50:40.598657    4568 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 16:50:40.598697    4568 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 16:50:40.598700    4568 kubeadm.go:310] 
	I0729 16:50:40.598747    4568 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 16:50:40.598788    4568 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 16:50:40.598792    4568 kubeadm.go:310] 
	I0729 16:50:40.598836    4568 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4u8amr.zyu6m2bhslxi0hbj \
	I0729 16:50:40.598952    4568 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eee67fd9c4d612d4862a690faaa2f19934e920987025477254241b5525ba3040 \
	I0729 16:50:40.598974    4568 kubeadm.go:310] 	--control-plane 
	I0729 16:50:40.598978    4568 kubeadm.go:310] 
	I0729 16:50:40.599151    4568 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 16:50:40.599179    4568 kubeadm.go:310] 
	I0729 16:50:40.599229    4568 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4u8amr.zyu6m2bhslxi0hbj \
	I0729 16:50:40.599279    4568 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eee67fd9c4d612d4862a690faaa2f19934e920987025477254241b5525ba3040 
	I0729 16:50:40.599335    4568 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
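The join commands printed above embed a bootstrap token plus a hash of the cluster CA's public key, which joining nodes use to authenticate the control plane. The hash can be recomputed from the CA with the standard kubeadm recipe; the path below assumes minikube's certificateDir of /var/lib/minikube/certs from the [certs] lines earlier:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # output should match the sha256:... value in the kubeadm join lines above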
	I0729 16:50:40.599359    4568 cni.go:84] Creating CNI manager for ""
	I0729 16:50:40.599381    4568 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:50:40.602469    4568 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 16:50:40.610417    4568 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 16:50:40.613414    4568 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
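The scp above pushes a 496-byte bridge CNI config into /etc/cni/net.d. The actual file contents are not shown in this log; the sketch below is only an illustration of the usual shape of a bridge conflist (plugin names, options, and the subnet are assumptions, not minikube's real 1-k8s.conflist):

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF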
	I0729 16:50:40.617969    4568 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 16:50:40.618011    4568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:50:40.618036    4568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-480000 minikube.k8s.io/updated_at=2024_07_29T16_50_40_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b13baeaf4895dcc6a8c5d0ab64a27ff86dff4ae3 minikube.k8s.io/name=stopped-upgrade-480000 minikube.k8s.io/primary=true
	I0729 16:50:40.621198    4568 ops.go:34] apiserver oom_adj: -16
	I0729 16:50:40.656912    4568 kubeadm.go:1113] duration metric: took 38.935916ms to wait for elevateKubeSystemPrivileges
	I0729 16:50:40.656930    4568 kubeadm.go:394] duration metric: took 4m11.366545375s to StartCluster
	I0729 16:50:40.656940    4568 settings.go:142] acquiring lock: {Name:mk3b097bc26d2850dd7467a616788f5486d088c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:50:40.657024    4568 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:50:40.657463    4568 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/kubeconfig: {Name:mkd561657b833051fbf9227370398307b87f9720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:50:40.657646    4568 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:50:40.657670    4568 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 16:50:40.657750    4568 config.go:182] Loaded profile config "stopped-upgrade-480000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:50:40.657763    4568 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-480000"
	I0729 16:50:40.657750    4568 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-480000"
	I0729 16:50:40.657792    4568 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-480000"
	I0729 16:50:40.657796    4568 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-480000"
	W0729 16:50:40.657799    4568 addons.go:243] addon storage-provisioner should already be in state true
	I0729 16:50:40.657810    4568 host.go:66] Checking if "stopped-upgrade-480000" exists ...
	I0729 16:50:40.659034    4568 kapi.go:59] client config for stopped-upgrade-480000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/client.key", CAFile:"/Users/jenkins/minikube-integration/19347-923/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101ae0080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 16:50:40.659147    4568 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-480000"
	W0729 16:50:40.659154    4568 addons.go:243] addon default-storageclass should already be in state true
	I0729 16:50:40.659161    4568 host.go:66] Checking if "stopped-upgrade-480000" exists ...
	I0729 16:50:40.661332    4568 out.go:177] * Verifying Kubernetes components...
	I0729 16:50:40.661640    4568 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 16:50:40.664548    4568 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 16:50:40.664554    4568 sshutil.go:53] new ssh client: &{IP:localhost Port:50473 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/id_rsa Username:docker}
	I0729 16:50:40.667314    4568 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:50:40.671389    4568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:50:40.675264    4568 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 16:50:40.675271    4568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 16:50:40.675278    4568 sshutil.go:53] new ssh client: &{IP:localhost Port:50473 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/id_rsa Username:docker}
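Both sshutil lines show how the harness reaches the qemu2 guest: it dials a forwarded port on the host (localhost:50473 here) as user "docker" with the profile's private key, then scp's the addon manifests over that session. A hedged sketch of opening the same session by hand (host, port, user, and key path all taken from the sshutil lines above):

    ssh -i /Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/id_rsa \
        -p 50473 docker@localhost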
	I0729 16:50:40.765144    4568 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 16:50:40.770061    4568 api_server.go:52] waiting for apiserver process to appear ...
	I0729 16:50:40.770105    4568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:50:40.773747    4568 api_server.go:72] duration metric: took 116.091ms to wait for apiserver process to appear ...
	I0729 16:50:40.773756    4568 api_server.go:88] waiting for apiserver healthz status ...
	I0729 16:50:40.773764    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:40.785805    4568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 16:50:41.289671    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:40.852266    4568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
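The two applies above install the storage-provisioner and storageclass addons using the kubectl binary pinned to the cluster version (v1.24.1) inside the guest, with KUBECONFIG pointing at the in-VM kubeconfig. A hedged sketch of verifying the result the same way:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.24.1/kubectl get pods -n kube-system
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.24.1/kubectl get storageclass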
	I0729 16:50:45.773927    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:45.773969    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:46.290838    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:46.291016    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:50:46.313719    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:50:46.313838    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:50:46.329157    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:50:46.329231    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:50:46.342125    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:50:46.342209    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:50:46.364383    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:50:46.364460    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:50:46.390478    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:50:46.390555    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:50:46.401062    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:50:46.401135    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:50:46.411014    4389 logs.go:276] 0 containers: []
	W0729 16:50:46.411024    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:50:46.411084    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:50:46.422005    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:50:46.422024    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:50:46.422029    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:50:46.438887    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:50:46.438899    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:50:46.454501    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:50:46.454511    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:50:46.479578    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:50:46.479588    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:50:46.491674    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:50:46.491686    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:50:46.527625    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:50:46.527638    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:50:46.541485    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:50:46.541496    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:50:46.555446    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:50:46.555456    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:50:46.566514    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:50:46.566527    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:50:46.581593    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:50:46.581608    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:50:46.586131    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:50:46.586139    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:50:46.622718    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:50:46.622732    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:50:46.634377    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:50:46.634387    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:50:46.646406    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:50:46.646416    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:50:46.657962    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:50:46.657972    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:50:49.181962    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:50.775693    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:50.775721    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:54.183509    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:54.183643    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:50:54.196697    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:50:54.196774    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:50:54.209149    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:50:54.209221    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:50:54.222411    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:50:54.222488    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:50:54.235370    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:50:54.235446    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:50:54.248646    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:50:54.248723    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:50:54.259995    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:50:54.260086    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:50:54.272520    4389 logs.go:276] 0 containers: []
	W0729 16:50:54.272534    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:50:54.272606    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:50:54.285074    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:50:54.285094    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:50:54.285100    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:50:54.298779    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:50:54.298791    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:50:54.334696    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:50:54.334709    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:50:54.371309    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:50:54.371321    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:50:54.383055    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:50:54.383066    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:50:54.394850    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:50:54.394865    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:50:54.406588    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:50:54.406602    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:50:54.424416    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:50:54.424427    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:50:54.447935    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:50:54.447943    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:50:54.459725    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:50:54.459735    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:50:54.478326    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:50:54.478337    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:50:54.490188    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:50:54.490203    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:50:54.494801    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:50:54.494809    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:50:54.510184    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:50:54.510197    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:50:54.524960    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:50:54.524974    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:50:55.775957    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:55.775995    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:57.039725    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:51:00.776416    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:51:00.776460    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:51:02.041324    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:51:02.041539    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:51:02.067308    4389 logs.go:276] 1 containers: [f3a95cd743ff]
	I0729 16:51:02.067429    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:51:02.082301    4389 logs.go:276] 1 containers: [28e386923c44]
	I0729 16:51:02.082387    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:51:02.095124    4389 logs.go:276] 4 containers: [8745002adc0d a1846a41c074 4b491e173233 af28ca5a05f8]
	I0729 16:51:02.095199    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:51:02.107727    4389 logs.go:276] 1 containers: [176fa8dbd10a]
	I0729 16:51:02.107795    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:51:02.118458    4389 logs.go:276] 1 containers: [f58cc34de629]
	I0729 16:51:02.118529    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:51:02.129096    4389 logs.go:276] 1 containers: [bf26227a9db1]
	I0729 16:51:02.129171    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:51:02.139986    4389 logs.go:276] 0 containers: []
	W0729 16:51:02.139999    4389 logs.go:278] No container was found matching "kindnet"
	I0729 16:51:02.140058    4389 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:51:02.150302    4389 logs.go:276] 1 containers: [cde0ae623702]
	I0729 16:51:02.150319    4389 logs.go:123] Gathering logs for kubelet ...
	I0729 16:51:02.150324    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:51:02.186131    4389 logs.go:123] Gathering logs for kube-apiserver [f3a95cd743ff] ...
	I0729 16:51:02.186143    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3a95cd743ff"
	I0729 16:51:02.200886    4389 logs.go:123] Gathering logs for coredns [a1846a41c074] ...
	I0729 16:51:02.200895    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1846a41c074"
	I0729 16:51:02.212525    4389 logs.go:123] Gathering logs for coredns [af28ca5a05f8] ...
	I0729 16:51:02.212539    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28ca5a05f8"
	I0729 16:51:02.223677    4389 logs.go:123] Gathering logs for Docker ...
	I0729 16:51:02.223687    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:51:02.248392    4389 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:51:02.248401    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:51:02.282832    4389 logs.go:123] Gathering logs for etcd [28e386923c44] ...
	I0729 16:51:02.282845    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e386923c44"
	I0729 16:51:02.298651    4389 logs.go:123] Gathering logs for coredns [4b491e173233] ...
	I0729 16:51:02.298667    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b491e173233"
	I0729 16:51:02.310740    4389 logs.go:123] Gathering logs for kube-scheduler [176fa8dbd10a] ...
	I0729 16:51:02.310752    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 176fa8dbd10a"
	I0729 16:51:02.324930    4389 logs.go:123] Gathering logs for dmesg ...
	I0729 16:51:02.324942    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:51:02.329836    4389 logs.go:123] Gathering logs for kube-controller-manager [bf26227a9db1] ...
	I0729 16:51:02.329845    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf26227a9db1"
	I0729 16:51:02.346672    4389 logs.go:123] Gathering logs for storage-provisioner [cde0ae623702] ...
	I0729 16:51:02.346683    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde0ae623702"
	I0729 16:51:02.357987    4389 logs.go:123] Gathering logs for container status ...
	I0729 16:51:02.357998    4389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:51:02.369621    4389 logs.go:123] Gathering logs for coredns [8745002adc0d] ...
	I0729 16:51:02.369634    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8745002adc0d"
	I0729 16:51:02.381373    4389 logs.go:123] Gathering logs for kube-proxy [f58cc34de629] ...
	I0729 16:51:02.381383    4389 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f58cc34de629"
	I0729 16:51:05.776899    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:51:05.776941    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:51:04.895460    4389 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:51:09.897674    4389 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:51:09.902066    4389 out.go:177] 
	W0729 16:51:09.904973    4389 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 16:51:09.904980    4389 out.go:239] * 
	W0729 16:51:09.905556    4389 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:51:09.915965    4389 out.go:177] 
	I0729 16:51:10.777449    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:51:10.777525    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 16:51:11.160965    4568 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 16:51:11.165358    4568 out.go:177] * Enabled addons: storage-provisioner
	I0729 16:51:11.174306    4568 addons.go:510] duration metric: took 30.517069292s for enable addons: enabled=[storage-provisioner]
	I0729 16:51:15.778179    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:51:15.778207    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:51:20.779058    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:51:20.779095    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-07-29 23:42:13 UTC, ends at Mon 2024-07-29 23:51:25 UTC. --
	Jul 29 23:51:10 running-upgrade-980000 dockerd[3158]: time="2024-07-29T23:51:10.726309933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 23:51:10 running-upgrade-980000 dockerd[3158]: time="2024-07-29T23:51:10.726364219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 23:51:10 running-upgrade-980000 dockerd[3158]: time="2024-07-29T23:51:10.726374552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 23:51:10 running-upgrade-980000 dockerd[3158]: time="2024-07-29T23:51:10.726432046Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/a0e2f143b7e2c6efa4652e6d89702238532ee9e45c7eefb2ac971d8b3f5f1a86 pid=18458 runtime=io.containerd.runc.v2
	Jul 29 23:51:11 running-upgrade-980000 cri-dockerd[3000]: time="2024-07-29T23:51:11Z" level=error msg="ContainerStats resp: {0x40004c0bc0 linux}"
	Jul 29 23:51:12 running-upgrade-980000 cri-dockerd[3000]: time="2024-07-29T23:51:12Z" level=error msg="ContainerStats resp: {0x4000909340 linux}"
	Jul 29 23:51:12 running-upgrade-980000 cri-dockerd[3000]: time="2024-07-29T23:51:12Z" level=error msg="ContainerStats resp: {0x4000909480 linux}"
	Jul 29 23:51:12 running-upgrade-980000 cri-dockerd[3000]: time="2024-07-29T23:51:12Z" level=error msg="ContainerStats resp: {0x4000909c00 linux}"
	Jul 29 23:51:12 running-upgrade-980000 cri-dockerd[3000]: time="2024-07-29T23:51:12Z" level=error msg="ContainerStats resp: {0x400024e0c0 linux}"
	Jul 29 23:51:12 running-upgrade-980000 cri-dockerd[3000]: time="2024-07-29T23:51:12Z" level=error msg="ContainerStats resp: {0x400024e4c0 linux}"
	Jul 29 23:51:12 running-upgrade-980000 cri-dockerd[3000]: time="2024-07-29T23:51:12Z" level=error msg="ContainerStats resp: {0x4000792b40 linux}"
	Jul 29 23:51:12 running-upgrade-980000 cri-dockerd[3000]: time="2024-07-29T23:51:12Z" level=error msg="ContainerStats resp: {0x4000792d00 linux}"
	Jul 29 23:51:14 running-upgrade-980000 cri-dockerd[3000]: time="2024-07-29T23:51:14Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 23:51:19 running-upgrade-980000 cri-dockerd[3000]: time="2024-07-29T23:51:19Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 23:51:22 running-upgrade-980000 cri-dockerd[3000]: time="2024-07-29T23:51:22Z" level=error msg="ContainerStats resp: {0x40007585c0 linux}"
	Jul 29 23:51:22 running-upgrade-980000 cri-dockerd[3000]: time="2024-07-29T23:51:22Z" level=error msg="ContainerStats resp: {0x4000759740 linux}"
	Jul 29 23:51:23 running-upgrade-980000 cri-dockerd[3000]: time="2024-07-29T23:51:23Z" level=error msg="ContainerStats resp: {0x40009093c0 linux}"
	Jul 29 23:51:24 running-upgrade-980000 cri-dockerd[3000]: time="2024-07-29T23:51:24Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 23:51:24 running-upgrade-980000 cri-dockerd[3000]: time="2024-07-29T23:51:24Z" level=error msg="ContainerStats resp: {0x4000893f80 linux}"
	Jul 29 23:51:24 running-upgrade-980000 cri-dockerd[3000]: time="2024-07-29T23:51:24Z" level=error msg="ContainerStats resp: {0x400024e5c0 linux}"
	Jul 29 23:51:24 running-upgrade-980000 cri-dockerd[3000]: time="2024-07-29T23:51:24Z" level=error msg="ContainerStats resp: {0x40004c0ec0 linux}"
	Jul 29 23:51:24 running-upgrade-980000 cri-dockerd[3000]: time="2024-07-29T23:51:24Z" level=error msg="ContainerStats resp: {0x40004c12c0 linux}"
	Jul 29 23:51:24 running-upgrade-980000 cri-dockerd[3000]: time="2024-07-29T23:51:24Z" level=error msg="ContainerStats resp: {0x40004c0040 linux}"
	Jul 29 23:51:24 running-upgrade-980000 cri-dockerd[3000]: time="2024-07-29T23:51:24Z" level=error msg="ContainerStats resp: {0x40004c04c0 linux}"
	Jul 29 23:51:24 running-upgrade-980000 cri-dockerd[3000]: time="2024-07-29T23:51:24Z" level=error msg="ContainerStats resp: {0x400024e8c0 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	a0e2f143b7e2c       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   96b260a53729a
	f22941a1f61f7       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   cbd6538f4f2f4
	8745002adc0d1       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   96b260a53729a
	a1846a41c074f       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   cbd6538f4f2f4
	f58cc34de6296       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   d0126b4674c16
	cde0ae6237020       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   7fd8d87b111cc
	176fa8dbd10a5       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   534bd5494ea7a
	bf26227a9db12       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   c574795b3e8ac
	28e386923c448       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   365a187cfd5ae
	f3a95cd743ff3       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   499b8d6e0c41d
	
	
	==> coredns [8745002adc0d] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2781541317958827531.8198650318278848794. HINFO: read udp 10.244.0.3:56853->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2781541317958827531.8198650318278848794. HINFO: read udp 10.244.0.3:58469->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2781541317958827531.8198650318278848794. HINFO: read udp 10.244.0.3:58638->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a0e2f143b7e2] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4369127140055825483.5538707276504975899. HINFO: read udp 10.244.0.3:42260->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4369127140055825483.5538707276504975899. HINFO: read udp 10.244.0.3:47458->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4369127140055825483.5538707276504975899. HINFO: read udp 10.244.0.3:40167->10.0.2.3:53: i/o timeout
	
	
	==> coredns [a1846a41c074] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3305773775132710044.6334102894347885946. HINFO: read udp 10.244.0.2:51473->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3305773775132710044.6334102894347885946. HINFO: read udp 10.244.0.2:45388->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3305773775132710044.6334102894347885946. HINFO: read udp 10.244.0.2:37754->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3305773775132710044.6334102894347885946. HINFO: read udp 10.244.0.2:50520->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3305773775132710044.6334102894347885946. HINFO: read udp 10.244.0.2:41727->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3305773775132710044.6334102894347885946. HINFO: read udp 10.244.0.2:37107->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3305773775132710044.6334102894347885946. HINFO: read udp 10.244.0.2:54688->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3305773775132710044.6334102894347885946. HINFO: read udp 10.244.0.2:50985->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3305773775132710044.6334102894347885946. HINFO: read udp 10.244.0.2:34972->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3305773775132710044.6334102894347885946. HINFO: read udp 10.244.0.2:43586->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f22941a1f61f] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6159080957138889697.8324645416732398128. HINFO: read udp 10.244.0.2:34959->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6159080957138889697.8324645416732398128. HINFO: read udp 10.244.0.2:58839->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6159080957138889697.8324645416732398128. HINFO: read udp 10.244.0.2:43184->10.0.2.3:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               running-upgrade-980000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-980000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b13baeaf4895dcc6a8c5d0ab64a27ff86dff4ae3
	                    minikube.k8s.io/name=running-upgrade-980000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T16_47_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 23:47:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-980000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 23:51:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 23:47:09 +0000   Mon, 29 Jul 2024 23:47:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 23:47:09 +0000   Mon, 29 Jul 2024 23:47:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 23:47:09 +0000   Mon, 29 Jul 2024 23:47:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 23:47:09 +0000   Mon, 29 Jul 2024 23:47:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-980000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 78fb73dcb7f4480981df53d9bff65b31
	  System UUID:                78fb73dcb7f4480981df53d9bff65b31
	  Boot ID:                    2558bc39-208d-40f0-afb2-bea036008c53
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-dn6ps                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-zg98r                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-980000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kube-apiserver-running-upgrade-980000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-980000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-z6bl8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-980000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-980000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-980000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-980000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-980000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-980000 event: Registered Node running-upgrade-980000 in Controller
	
	
	==> dmesg <==
	[  +1.690756] systemd-fstab-generator[878]: Ignoring "noauto" for root device
	[  +0.063778] systemd-fstab-generator[889]: Ignoring "noauto" for root device
	[  +0.059349] systemd-fstab-generator[900]: Ignoring "noauto" for root device
	[  +1.139060] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.070851] systemd-fstab-generator[1051]: Ignoring "noauto" for root device
	[  +0.065314] systemd-fstab-generator[1062]: Ignoring "noauto" for root device
	[  +2.037949] systemd-fstab-generator[1293]: Ignoring "noauto" for root device
	[  +8.176392] systemd-fstab-generator[1834]: Ignoring "noauto" for root device
	[  +2.665154] systemd-fstab-generator[2192]: Ignoring "noauto" for root device
	[  +0.133758] systemd-fstab-generator[2226]: Ignoring "noauto" for root device
	[  +0.073924] systemd-fstab-generator[2237]: Ignoring "noauto" for root device
	[  +0.086622] systemd-fstab-generator[2253]: Ignoring "noauto" for root device
	[ +12.500446] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.158936] systemd-fstab-generator[2957]: Ignoring "noauto" for root device
	[  +0.070999] systemd-fstab-generator[2968]: Ignoring "noauto" for root device
	[  +0.065312] systemd-fstab-generator[2979]: Ignoring "noauto" for root device
	[  +0.070201] systemd-fstab-generator[2993]: Ignoring "noauto" for root device
	[  +2.293932] systemd-fstab-generator[3145]: Ignoring "noauto" for root device
	[  +2.322285] systemd-fstab-generator[3496]: Ignoring "noauto" for root device
	[  +1.308988] systemd-fstab-generator[3778]: Ignoring "noauto" for root device
	[Jul29 23:43] kauditd_printk_skb: 68 callbacks suppressed
	[Jul29 23:47] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.130359] systemd-fstab-generator[11508]: Ignoring "noauto" for root device
	[  +5.109784] systemd-fstab-generator[12100]: Ignoring "noauto" for root device
	[  +0.473020] systemd-fstab-generator[12252]: Ignoring "noauto" for root device
	
	
	==> etcd [28e386923c44] <==
	{"level":"info","ts":"2024-07-29T23:47:05.066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-29T23:47:05.066Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-29T23:47:05.082Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T23:47:05.082Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T23:47:05.082Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T23:47:05.082Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-29T23:47:05.082Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-29T23:47:05.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T23:47:05.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T23:47:05.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-29T23:47:05.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T23:47:05.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-29T23:47:05.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T23:47:05.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-29T23:47:05.120Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T23:47:05.120Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T23:47:05.120Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T23:47:05.120Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T23:47:05.120Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-980000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T23:47:05.120Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T23:47:05.121Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-07-29T23:47:05.132Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T23:47:05.132Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T23:47:05.132Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T23:47:05.132Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 23:51:26 up 9 min,  0 users,  load average: 0.72, 0.34, 0.15
	Linux running-upgrade-980000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [f3a95cd743ff] <==
	I0729 23:47:06.632819       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0729 23:47:06.644375       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0729 23:47:06.644384       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0729 23:47:06.649739       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 23:47:06.649829       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 23:47:06.649839       1 cache.go:39] Caches are synced for autoregister controller
	I0729 23:47:06.653955       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0729 23:47:07.376976       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0729 23:47:07.548472       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0729 23:47:07.555014       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0729 23:47:07.555188       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 23:47:07.709270       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 23:47:07.719920       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 23:47:07.821253       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0729 23:47:07.823300       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0729 23:47:07.823639       1 controller.go:611] quota admission added evaluator for: endpoints
	I0729 23:47:07.824836       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 23:47:08.699831       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0729 23:47:09.054262       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0729 23:47:09.057206       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0729 23:47:09.069760       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0729 23:47:09.118004       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 23:47:21.605120       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0729 23:47:22.304787       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0729 23:47:22.798998       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [bf26227a9db1] <==
	I0729 23:47:21.562729       1 range_allocator.go:173] Starting range CIDR allocator
	I0729 23:47:21.562734       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0729 23:47:21.562738       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0729 23:47:21.564076       1 shared_informer.go:262] Caches are synced for taint
	I0729 23:47:21.564156       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0729 23:47:21.564212       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-980000. Assuming now as a timestamp.
	I0729 23:47:21.564259       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0729 23:47:21.564377       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0729 23:47:21.564539       1 event.go:294] "Event occurred" object="running-upgrade-980000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-980000 event: Registered Node running-upgrade-980000 in Controller"
	I0729 23:47:21.565646       1 range_allocator.go:374] Set node running-upgrade-980000 PodCIDR to [10.244.0.0/24]
	I0729 23:47:21.567534       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0729 23:47:21.600016       1 shared_informer.go:262] Caches are synced for TTL
	I0729 23:47:21.606444       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0729 23:47:21.618627       1 shared_informer.go:262] Caches are synced for disruption
	I0729 23:47:21.618712       1 disruption.go:371] Sending events to api server.
	I0729 23:47:21.665314       1 shared_informer.go:262] Caches are synced for attach detach
	I0729 23:47:21.753679       1 shared_informer.go:262] Caches are synced for HPA
	I0729 23:47:21.758675       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 23:47:21.776472       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 23:47:22.169890       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 23:47:22.185051       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 23:47:22.185123       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0729 23:47:22.307197       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-z6bl8"
	I0729 23:47:22.557943       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-zg98r"
	I0729 23:47:22.561104       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-dn6ps"
	
	
	==> kube-proxy [f58cc34de629] <==
	I0729 23:47:22.785898       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0729 23:47:22.785924       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0729 23:47:22.785934       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0729 23:47:22.796897       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0729 23:47:22.796909       1 server_others.go:206] "Using iptables Proxier"
	I0729 23:47:22.796922       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0729 23:47:22.797008       1 server.go:661] "Version info" version="v1.24.1"
	I0729 23:47:22.797012       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 23:47:22.797269       1 config.go:317] "Starting service config controller"
	I0729 23:47:22.797275       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0729 23:47:22.797287       1 config.go:226] "Starting endpoint slice config controller"
	I0729 23:47:22.797288       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0729 23:47:22.798216       1 config.go:444] "Starting node config controller"
	I0729 23:47:22.798225       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0729 23:47:22.899815       1 shared_informer.go:262] Caches are synced for node config
	I0729 23:47:22.899817       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0729 23:47:22.899826       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [176fa8dbd10a] <==
	W0729 23:47:06.612685       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 23:47:06.612703       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 23:47:06.612742       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 23:47:06.612762       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 23:47:06.612786       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 23:47:06.612847       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 23:47:06.612879       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 23:47:06.612902       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 23:47:06.612937       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 23:47:06.612956       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 23:47:06.612979       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 23:47:06.613012       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 23:47:06.613037       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 23:47:06.613054       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 23:47:06.613078       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 23:47:06.613115       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 23:47:06.613139       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 23:47:06.613165       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 23:47:06.613297       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 23:47:06.613331       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 23:47:07.536081       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 23:47:07.536194       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 23:47:07.573477       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 23:47:07.573513       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0729 23:47:07.909810       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-07-29 23:42:13 UTC, ends at Mon 2024-07-29 23:51:26 UTC. --
	Jul 29 23:47:09 running-upgrade-980000 kubelet[12106]: E0729 23:47:09.690283   12106 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-980000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-980000"
	Jul 29 23:47:10 running-upgrade-980000 kubelet[12106]: I0729 23:47:10.102123   12106 apiserver.go:52] "Watching apiserver"
	Jul 29 23:47:10 running-upgrade-980000 kubelet[12106]: I0729 23:47:10.525500   12106 reconciler.go:157] "Reconciler: start to sync state"
	Jul 29 23:47:10 running-upgrade-980000 kubelet[12106]: E0729 23:47:10.691173   12106 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-980000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-980000"
	Jul 29 23:47:21 running-upgrade-980000 kubelet[12106]: I0729 23:47:21.570076   12106 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 23:47:21 running-upgrade-980000 kubelet[12106]: I0729 23:47:21.630108   12106 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 23:47:21 running-upgrade-980000 kubelet[12106]: I0729 23:47:21.630185   12106 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89cs7\" (UniqueName: \"kubernetes.io/projected/2bcb0022-8510-42ac-aa35-5fdc6614256a-kube-api-access-89cs7\") pod \"storage-provisioner\" (UID: \"2bcb0022-8510-42ac-aa35-5fdc6614256a\") " pod="kube-system/storage-provisioner"
	Jul 29 23:47:21 running-upgrade-980000 kubelet[12106]: I0729 23:47:21.630213   12106 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2bcb0022-8510-42ac-aa35-5fdc6614256a-tmp\") pod \"storage-provisioner\" (UID: \"2bcb0022-8510-42ac-aa35-5fdc6614256a\") " pod="kube-system/storage-provisioner"
	Jul 29 23:47:21 running-upgrade-980000 kubelet[12106]: I0729 23:47:21.630453   12106 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 23:47:21 running-upgrade-980000 kubelet[12106]: E0729 23:47:21.733538   12106 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 29 23:47:21 running-upgrade-980000 kubelet[12106]: E0729 23:47:21.733558   12106 projected.go:192] Error preparing data for projected volume kube-api-access-89cs7 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 29 23:47:21 running-upgrade-980000 kubelet[12106]: E0729 23:47:21.733594   12106 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/2bcb0022-8510-42ac-aa35-5fdc6614256a-kube-api-access-89cs7 podName:2bcb0022-8510-42ac-aa35-5fdc6614256a nodeName:}" failed. No retries permitted until 2024-07-29 23:47:22.233580217 +0000 UTC m=+13.187864837 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-89cs7" (UniqueName: "kubernetes.io/projected/2bcb0022-8510-42ac-aa35-5fdc6614256a-kube-api-access-89cs7") pod "storage-provisioner" (UID: "2bcb0022-8510-42ac-aa35-5fdc6614256a") : configmap "kube-root-ca.crt" not found
	Jul 29 23:47:22 running-upgrade-980000 kubelet[12106]: I0729 23:47:22.309947   12106 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 23:47:22 running-upgrade-980000 kubelet[12106]: I0729 23:47:22.450315   12106 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkxgk\" (UniqueName: \"kubernetes.io/projected/7bf77b9a-f3ff-4b50-a169-8d67bb1b6641-kube-api-access-pkxgk\") pod \"kube-proxy-z6bl8\" (UID: \"7bf77b9a-f3ff-4b50-a169-8d67bb1b6641\") " pod="kube-system/kube-proxy-z6bl8"
	Jul 29 23:47:22 running-upgrade-980000 kubelet[12106]: I0729 23:47:22.450353   12106 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7bf77b9a-f3ff-4b50-a169-8d67bb1b6641-kube-proxy\") pod \"kube-proxy-z6bl8\" (UID: \"7bf77b9a-f3ff-4b50-a169-8d67bb1b6641\") " pod="kube-system/kube-proxy-z6bl8"
	Jul 29 23:47:22 running-upgrade-980000 kubelet[12106]: I0729 23:47:22.450366   12106 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bf77b9a-f3ff-4b50-a169-8d67bb1b6641-xtables-lock\") pod \"kube-proxy-z6bl8\" (UID: \"7bf77b9a-f3ff-4b50-a169-8d67bb1b6641\") " pod="kube-system/kube-proxy-z6bl8"
	Jul 29 23:47:22 running-upgrade-980000 kubelet[12106]: I0729 23:47:22.450377   12106 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bf77b9a-f3ff-4b50-a169-8d67bb1b6641-lib-modules\") pod \"kube-proxy-z6bl8\" (UID: \"7bf77b9a-f3ff-4b50-a169-8d67bb1b6641\") " pod="kube-system/kube-proxy-z6bl8"
	Jul 29 23:47:22 running-upgrade-980000 kubelet[12106]: I0729 23:47:22.563876   12106 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 23:47:22 running-upgrade-980000 kubelet[12106]: I0729 23:47:22.565433   12106 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 23:47:22 running-upgrade-980000 kubelet[12106]: I0729 23:47:22.651336   12106 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40a9114a-26b9-4bda-86ed-ffd8f42172a1-config-volume\") pod \"coredns-6d4b75cb6d-zg98r\" (UID: \"40a9114a-26b9-4bda-86ed-ffd8f42172a1\") " pod="kube-system/coredns-6d4b75cb6d-zg98r"
	Jul 29 23:47:22 running-upgrade-980000 kubelet[12106]: I0729 23:47:22.651364   12106 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8bvh\" (UniqueName: \"kubernetes.io/projected/7c898e25-fa3e-46e6-bd32-8715f6146595-kube-api-access-h8bvh\") pod \"coredns-6d4b75cb6d-dn6ps\" (UID: \"7c898e25-fa3e-46e6-bd32-8715f6146595\") " pod="kube-system/coredns-6d4b75cb6d-dn6ps"
	Jul 29 23:47:22 running-upgrade-980000 kubelet[12106]: I0729 23:47:22.651376   12106 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vzlk\" (UniqueName: \"kubernetes.io/projected/40a9114a-26b9-4bda-86ed-ffd8f42172a1-kube-api-access-6vzlk\") pod \"coredns-6d4b75cb6d-zg98r\" (UID: \"40a9114a-26b9-4bda-86ed-ffd8f42172a1\") " pod="kube-system/coredns-6d4b75cb6d-zg98r"
	Jul 29 23:47:22 running-upgrade-980000 kubelet[12106]: I0729 23:47:22.651386   12106 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c898e25-fa3e-46e6-bd32-8715f6146595-config-volume\") pod \"coredns-6d4b75cb6d-dn6ps\" (UID: \"7c898e25-fa3e-46e6-bd32-8715f6146595\") " pod="kube-system/coredns-6d4b75cb6d-dn6ps"
	Jul 29 23:51:11 running-upgrade-980000 kubelet[12106]: I0729 23:51:11.376283   12106 scope.go:110] "RemoveContainer" containerID="af28ca5a05f8f910aa0fecfc23049dfb518daf2debce49846875b9c2fc5220c9"
	Jul 29 23:51:11 running-upgrade-980000 kubelet[12106]: I0729 23:51:11.393151   12106 scope.go:110] "RemoveContainer" containerID="4b491e17323394f3ea0f7e68efca6a44edd6c23e2322acaf99af2264ef1f3377"
	
	
	==> storage-provisioner [cde0ae623702] <==
	I0729 23:47:22.696349       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 23:47:22.701490       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 23:47:22.701550       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 23:47:22.705280       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 23:47:22.705334       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-980000_ccc4fc3b-2074-469a-9e1e-3f639472d9cf!
	I0729 23:47:22.711487       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a4183801-4d67-4992-a5d6-112dc3b39caa", APIVersion:"v1", ResourceVersion:"357", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-980000_ccc4fc3b-2074-469a-9e1e-3f639472d9cf became leader
	I0729 23:47:22.805500       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-980000_ccc4fc3b-2074-469a-9e1e-3f639472d9cf!
	

-- /stdout --
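
The storage-provisioner log above shows a clean startup: it acquired the kube-system/k8s.io-minikube-hostpath leader-election lease, which is recorded on an Endpoints object (see the LeaderElection event). A quick manual way to inspect that record on a live cluster (a hedged sketch; the object name comes from the event above, and the annotation key is the conventional client-go one):

	# Show the leader-election record the provisioner wrote:
	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# The holderIdentity inside the control-plane.alpha.kubernetes.io/leader
	# annotation should match the hostname_uuid identity in the log above.
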
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-980000 -n running-upgrade-980000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-980000 -n running-upgrade-980000: exit status 2 (15.658792042s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-980000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-980000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-980000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-980000: (1.330696333s)
--- FAIL: TestRunningBinaryUpgrade (593.17s)
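
The upgrade itself is not what fails here: the post-upgrade apiserver probe above ran for 15.7s and still reported "Stopped", so the harness skipped its kubectl checks and tore the profile down. To rerun the same probe by hand and gather logs for triage (commands taken from this report; the profile name is specific to this run):

	# Probe only the apiserver field, exactly as helpers_test.go does:
	out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-980000
	# Capture full logs for a bug report, as the advice boxes elsewhere in this report suggest:
	out/minikube-darwin-arm64 logs --file=logs.txt -p running-upgrade-980000
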

TestKubernetesUpgrade (19s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-569000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-569000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.912666542s)

-- stdout --
	* [kubernetes-upgrade-569000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-569000" primary control-plane node in "kubernetes-upgrade-569000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-569000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
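
Both VM creation attempts die on the same host-side error: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. That implicates the daemon on the build agent rather than minikube itself, and it is the same root cause behind most qemu2 failures in this report. A minimal host-side triage sketch (assumed manual commands; the paths are the ones from the driver invocation in the stderr below):

	# Is the daemon socket present, and is a socket_vmnet process serving it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If socket_vmnet is managed by launchd, confirm the service is loaded:
	sudo launchctl list | grep -i vmnet
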
** stderr ** 
	I0729 16:44:50.266391    4482 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:44:50.266530    4482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:44:50.266533    4482 out.go:304] Setting ErrFile to fd 2...
	I0729 16:44:50.266535    4482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:44:50.266652    4482 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:44:50.267811    4482 out.go:298] Setting JSON to false
	I0729 16:44:50.284403    4482 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2653,"bootTime":1722294037,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:44:50.284481    4482 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:44:50.289729    4482 out.go:177] * [kubernetes-upgrade-569000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:44:50.297703    4482 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:44:50.297753    4482 notify.go:220] Checking for updates...
	I0729 16:44:50.303637    4482 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:44:50.306780    4482 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:44:50.309810    4482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:44:50.312644    4482 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:44:50.315676    4482 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:44:50.319019    4482 config.go:182] Loaded profile config "multinode-100000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:44:50.319085    4482 config.go:182] Loaded profile config "running-upgrade-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:44:50.319140    4482 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:44:50.323687    4482 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:44:50.330660    4482 start.go:297] selected driver: qemu2
	I0729 16:44:50.330667    4482 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:44:50.330674    4482 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:44:50.333070    4482 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:44:50.335630    4482 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:44:50.338770    4482 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 16:44:50.338790    4482 cni.go:84] Creating CNI manager for ""
	I0729 16:44:50.338799    4482 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 16:44:50.338822    4482 start.go:340] cluster config:
	{Name:kubernetes-upgrade-569000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-569000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:44:50.342671    4482 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:44:50.350672    4482 out.go:177] * Starting "kubernetes-upgrade-569000" primary control-plane node in "kubernetes-upgrade-569000" cluster
	I0729 16:44:50.354418    4482 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 16:44:50.354433    4482 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 16:44:50.354445    4482 cache.go:56] Caching tarball of preloaded images
	I0729 16:44:50.354503    4482 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:44:50.354509    4482 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 16:44:50.354562    4482 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/kubernetes-upgrade-569000/config.json ...
	I0729 16:44:50.354574    4482 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/kubernetes-upgrade-569000/config.json: {Name:mk57febbd2cf84b0f3f7dd9d24f4fad88b471179 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:44:50.354913    4482 start.go:360] acquireMachinesLock for kubernetes-upgrade-569000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:44:50.354955    4482 start.go:364] duration metric: took 28.167µs to acquireMachinesLock for "kubernetes-upgrade-569000"
	I0729 16:44:50.354967    4482 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-569000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-569000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:44:50.355001    4482 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:44:50.363666    4482 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:44:50.381236    4482 start.go:159] libmachine.API.Create for "kubernetes-upgrade-569000" (driver="qemu2")
	I0729 16:44:50.381267    4482 client.go:168] LocalClient.Create starting
	I0729 16:44:50.381353    4482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:44:50.381390    4482 main.go:141] libmachine: Decoding PEM data...
	I0729 16:44:50.381401    4482 main.go:141] libmachine: Parsing certificate...
	I0729 16:44:50.381443    4482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:44:50.381467    4482 main.go:141] libmachine: Decoding PEM data...
	I0729 16:44:50.381479    4482 main.go:141] libmachine: Parsing certificate...
	I0729 16:44:50.381885    4482 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:44:50.537555    4482 main.go:141] libmachine: Creating SSH key...
	I0729 16:44:50.661936    4482 main.go:141] libmachine: Creating Disk image...
	I0729 16:44:50.661943    4482 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:44:50.662154    4482 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/disk.qcow2
	I0729 16:44:50.671714    4482 main.go:141] libmachine: STDOUT: 
	I0729 16:44:50.671731    4482 main.go:141] libmachine: STDERR: 
	I0729 16:44:50.671787    4482 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/disk.qcow2 +20000M
	I0729 16:44:50.680024    4482 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:44:50.680039    4482 main.go:141] libmachine: STDERR: 
	I0729 16:44:50.680053    4482 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/disk.qcow2
	I0729 16:44:50.680058    4482 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:44:50.680070    4482 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:44:50.680096    4482 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:b1:99:10:53:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/disk.qcow2
	I0729 16:44:50.681746    4482 main.go:141] libmachine: STDOUT: 
	I0729 16:44:50.681761    4482 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:44:50.681779    4482 client.go:171] duration metric: took 300.512667ms to LocalClient.Create
	I0729 16:44:52.683958    4482 start.go:128] duration metric: took 2.32895975s to createHost
	I0729 16:44:52.684049    4482 start.go:83] releasing machines lock for "kubernetes-upgrade-569000", held for 2.329117709s
	W0729 16:44:52.684104    4482 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:44:52.702295    4482 out.go:177] * Deleting "kubernetes-upgrade-569000" in qemu2 ...
	W0729 16:44:52.731616    4482 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:44:52.731649    4482 start.go:729] Will try again in 5 seconds ...
	I0729 16:44:57.733849    4482 start.go:360] acquireMachinesLock for kubernetes-upgrade-569000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:44:57.734376    4482 start.go:364] duration metric: took 410.75µs to acquireMachinesLock for "kubernetes-upgrade-569000"
	I0729 16:44:57.734541    4482 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-569000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-569000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:44:57.734818    4482 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:44:57.740536    4482 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:44:57.792561    4482 start.go:159] libmachine.API.Create for "kubernetes-upgrade-569000" (driver="qemu2")
	I0729 16:44:57.792612    4482 client.go:168] LocalClient.Create starting
	I0729 16:44:57.792771    4482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:44:57.792841    4482 main.go:141] libmachine: Decoding PEM data...
	I0729 16:44:57.792859    4482 main.go:141] libmachine: Parsing certificate...
	I0729 16:44:57.792916    4482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:44:57.792961    4482 main.go:141] libmachine: Decoding PEM data...
	I0729 16:44:57.792973    4482 main.go:141] libmachine: Parsing certificate...
	I0729 16:44:57.793515    4482 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:44:57.978638    4482 main.go:141] libmachine: Creating SSH key...
	I0729 16:44:58.094376    4482 main.go:141] libmachine: Creating Disk image...
	I0729 16:44:58.094382    4482 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:44:58.094573    4482 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/disk.qcow2
	I0729 16:44:58.103871    4482 main.go:141] libmachine: STDOUT: 
	I0729 16:44:58.103891    4482 main.go:141] libmachine: STDERR: 
	I0729 16:44:58.103959    4482 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/disk.qcow2 +20000M
	I0729 16:44:58.111740    4482 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:44:58.111755    4482 main.go:141] libmachine: STDERR: 
	I0729 16:44:58.111777    4482 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/disk.qcow2
	I0729 16:44:58.111780    4482 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:44:58.111792    4482 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:44:58.111823    4482 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:c7:80:4b:db:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/disk.qcow2
	I0729 16:44:58.113427    4482 main.go:141] libmachine: STDOUT: 
	I0729 16:44:58.113443    4482 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:44:58.113456    4482 client.go:171] duration metric: took 320.844708ms to LocalClient.Create
	I0729 16:45:00.115604    4482 start.go:128] duration metric: took 2.380791416s to createHost
	I0729 16:45:00.115669    4482 start.go:83] releasing machines lock for "kubernetes-upgrade-569000", held for 2.381305666s
	W0729 16:45:00.115848    4482 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-569000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-569000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:45:00.123218    4482 out.go:177] 
	W0729 16:45:00.129174    4482 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:45:00.129189    4482 out.go:239] * 
	* 
	W0729 16:45:00.130557    4482 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:45:00.141153    4482 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-569000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
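
Exit status 80 is the code this report pairs with the GUEST_PROVISION failure shown in the stderr above, so callers can tell provisioning failures apart from ordinary errors. A sketch of how a wrapper script might branch on it (hypothetical script, not part of the test suite):

	out/minikube-darwin-arm64 start -p kubernetes-upgrade-569000 --driver=qemu2
	case $? in
	  0)  echo "cluster started" ;;
	  80) echo "guest provisioning failed; check socket_vmnet on the host" ;;
	  *)  echo "failed for another reason" ;;
	esac
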
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-569000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-569000: (3.695438125s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-569000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-569000 status --format={{.Host}}: exit status 7 (31.239834ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
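
Exit status 7 from minikube status is tolerated because status encodes component state in its exit code: a cleanly stopped host returns nonzero even though the command ran fine, which is what the "(may be ok)" annotation means. The same check done by hand (a sketch; 7 is the code observed in this log for a stopped host, not a documented contract):

	out/minikube-darwin-arm64 -p kubernetes-upgrade-569000 status --format={{.Host}}
	echo "status exit code: $?"   # 7 in this run: host Stopped, command itself fine
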
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-569000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
E0729 16:45:04.385503    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-569000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.180042916s)

-- stdout --
	* [kubernetes-upgrade-569000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-569000" primary control-plane node in "kubernetes-upgrade-569000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-569000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-569000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:45:03.908238    4521 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:45:03.908378    4521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:45:03.908385    4521 out.go:304] Setting ErrFile to fd 2...
	I0729 16:45:03.908387    4521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:45:03.908521    4521 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:45:03.909603    4521 out.go:298] Setting JSON to false
	I0729 16:45:03.926224    4521 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2666,"bootTime":1722294037,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:45:03.926303    4521 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:45:03.929651    4521 out.go:177] * [kubernetes-upgrade-569000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:45:03.936552    4521 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:45:03.936637    4521 notify.go:220] Checking for updates...
	I0729 16:45:03.943678    4521 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:45:03.946603    4521 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:45:03.949631    4521 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:45:03.952663    4521 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:45:03.953968    4521 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:45:03.956968    4521 config.go:182] Loaded profile config "kubernetes-upgrade-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 16:45:03.957237    4521 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:45:03.961628    4521 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:45:03.966684    4521 start.go:297] selected driver: qemu2
	I0729 16:45:03.966692    4521 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-569000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-569000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:45:03.966759    4521 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:45:03.969023    4521 cni.go:84] Creating CNI manager for ""
	I0729 16:45:03.969036    4521 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:45:03.969072    4521 start.go:340] cluster config:
	{Name:kubernetes-upgrade-569000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-569000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:45:03.972435    4521 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:45:03.980630    4521 out.go:177] * Starting "kubernetes-upgrade-569000" primary control-plane node in "kubernetes-upgrade-569000" cluster
	I0729 16:45:03.984641    4521 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 16:45:03.984656    4521 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 16:45:03.984665    4521 cache.go:56] Caching tarball of preloaded images
	I0729 16:45:03.984739    4521 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:45:03.984744    4521 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 16:45:03.984787    4521 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/kubernetes-upgrade-569000/config.json ...
	I0729 16:45:03.985115    4521 start.go:360] acquireMachinesLock for kubernetes-upgrade-569000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:45:03.985142    4521 start.go:364] duration metric: took 21.917µs to acquireMachinesLock for "kubernetes-upgrade-569000"
	I0729 16:45:03.985152    4521 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:45:03.985157    4521 fix.go:54] fixHost starting: 
	I0729 16:45:03.985266    4521 fix.go:112] recreateIfNeeded on kubernetes-upgrade-569000: state=Stopped err=<nil>
	W0729 16:45:03.985273    4521 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:45:03.993667    4521 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-569000" ...
	I0729 16:45:03.997713    4521 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:45:03.997741    4521 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:c7:80:4b:db:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/disk.qcow2
	I0729 16:45:03.999593    4521 main.go:141] libmachine: STDOUT: 
	I0729 16:45:03.999609    4521 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:45:03.999634    4521 fix.go:56] duration metric: took 14.4775ms for fixHost
	I0729 16:45:03.999638    4521 start.go:83] releasing machines lock for "kubernetes-upgrade-569000", held for 14.4925ms
	W0729 16:45:03.999644    4521 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:45:03.999671    4521 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:45:03.999675    4521 start.go:729] Will try again in 5 seconds ...
	I0729 16:45:09.001899    4521 start.go:360] acquireMachinesLock for kubernetes-upgrade-569000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:45:09.002417    4521 start.go:364] duration metric: took 397.625µs to acquireMachinesLock for "kubernetes-upgrade-569000"
	I0729 16:45:09.002576    4521 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:45:09.002595    4521 fix.go:54] fixHost starting: 
	I0729 16:45:09.003188    4521 fix.go:112] recreateIfNeeded on kubernetes-upgrade-569000: state=Stopped err=<nil>
	W0729 16:45:09.003210    4521 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:45:09.009872    4521 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-569000" ...
	I0729 16:45:09.013796    4521 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:45:09.014061    4521 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:c7:80:4b:db:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubernetes-upgrade-569000/disk.qcow2
	I0729 16:45:09.023807    4521 main.go:141] libmachine: STDOUT: 
	I0729 16:45:09.023864    4521 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:45:09.023942    4521 fix.go:56] duration metric: took 21.350708ms for fixHost
	I0729 16:45:09.023957    4521 start.go:83] releasing machines lock for "kubernetes-upgrade-569000", held for 21.519417ms
	W0729 16:45:09.024122    4521 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-569000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-569000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:45:09.032638    4521 out.go:177] 
	W0729 16:45:09.035869    4521 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:45:09.035897    4521 out.go:239] * 
	* 
	W0729 16:45:09.038388    4521 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:45:09.046803    4521 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-569000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-569000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-569000 version --output=json: exit status 1 (64.803917ms)

** stderr ** 
	error: context "kubernetes-upgrade-569000" does not exist

** /stderr **
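
kubectl fails here because no context named kubernetes-upgrade-569000 exists: neither start attempt got far enough to write an entry into the kubeconfig. Listing the contexts that were actually registered confirms this (standard kubectl, using the KUBECONFIG path from the run headers above):

	KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig \
		kubectl config get-contexts
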
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-29 16:45:09.126928 -0700 PDT m=+2539.236346210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-569000 -n kubernetes-upgrade-569000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-569000 -n kubernetes-upgrade-569000: exit status 7 (32.524167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-569000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-569000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-569000
--- FAIL: TestKubernetesUpgrade (19.00s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.81s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19347
- KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3027907703/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.81s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.58s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19347
- KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2252811394/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.58s)
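
Both hyperkit subtests fail for the same environmental reason: hyperkit is an Intel-only hypervisor, so minikube exits with DRV_UNSUPPORTED_OS (status 56) on this darwin/arm64 agent. These look like environment mismatches rather than regressions; a guard along these lines would skip them on Apple Silicon (an assumed wrapper sketch, not code from the suite):

	# Skip hyperkit-specific upgrade checks on Apple Silicon hosts:
	if [ "$(uname -m)" = "arm64" ]; then
		echo "SKIP: the hyperkit driver is x86_64-only"
		exit 0
	fi
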

TestStoppedBinaryUpgrade/Upgrade (571.5s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1244329748 start -p stopped-upgrade-480000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1244329748 start -p stopped-upgrade-480000 --memory=2200 --vm-driver=qemu2 : (38.122763083s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1244329748 -p stopped-upgrade-480000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1244329748 -p stopped-upgrade-480000 stop: (12.121687792s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-480000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0729 16:47:01.310660    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
E0729 16:48:41.908034    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-480000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.15515225s)

-- stdout --
	* [stopped-upgrade-480000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-480000" primary control-plane node in "stopped-upgrade-480000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-480000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
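
Note the contrast with the earlier qemu2 failures: this profile was created by the old v1.26.0 binary with no socket_vmnet settings (Network and SocketVMnetPath are empty in the cluster config below), so its VM actually boots and the run reaches "Enabled addons" before still exiting 80 after 8m41s. To see which network settings an existing profile carries (a hedged manual check; the config.json path follows the pattern shown elsewhere in this report):

	python3 -m json.tool \
		/Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/config.json \
		| grep -iE 'Network|SocketVMnet'
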
** stderr ** 
	I0729 16:46:00.801385    4568 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:46:00.801552    4568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:00.801558    4568 out.go:304] Setting ErrFile to fd 2...
	I0729 16:46:00.801561    4568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:00.801724    4568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:46:00.803042    4568 out.go:298] Setting JSON to false
	I0729 16:46:00.823037    4568 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2723,"bootTime":1722294037,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:46:00.823119    4568 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:46:00.827104    4568 out.go:177] * [stopped-upgrade-480000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:46:00.835012    4568 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:46:00.835073    4568 notify.go:220] Checking for updates...
	I0729 16:46:00.841958    4568 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:46:00.845009    4568 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:46:00.848019    4568 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:46:00.850992    4568 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:46:00.853963    4568 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:46:00.857308    4568 config.go:182] Loaded profile config "stopped-upgrade-480000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:46:00.859919    4568 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 16:46:00.862989    4568 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:46:00.866971    4568 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:46:00.874018    4568 start.go:297] selected driver: qemu2
	I0729 16:46:00.874026    4568 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-480000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50508 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-480000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 16:46:00.874100    4568 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:46:00.876740    4568 cni.go:84] Creating CNI manager for ""
	I0729 16:46:00.876755    4568 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:46:00.876781    4568 start.go:340] cluster config:
	{Name:stopped-upgrade-480000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50508 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-480000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 16:46:00.876832    4568 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:46:00.882931    4568 out.go:177] * Starting "stopped-upgrade-480000" primary control-plane node in "stopped-upgrade-480000" cluster
	I0729 16:46:00.887027    4568 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 16:46:00.887042    4568 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 16:46:00.887052    4568 cache.go:56] Caching tarball of preloaded images
	I0729 16:46:00.887108    4568 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:46:00.887113    4568 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 16:46:00.887169    4568 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/config.json ...
	I0729 16:46:00.887582    4568 start.go:360] acquireMachinesLock for stopped-upgrade-480000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:46:00.887614    4568 start.go:364] duration metric: took 26.583µs to acquireMachinesLock for "stopped-upgrade-480000"
	I0729 16:46:00.887623    4568 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:46:00.887628    4568 fix.go:54] fixHost starting: 
	I0729 16:46:00.887729    4568 fix.go:112] recreateIfNeeded on stopped-upgrade-480000: state=Stopped err=<nil>
	W0729 16:46:00.887736    4568 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:46:00.895954    4568 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-480000" ...
	I0729 16:46:00.899820    4568 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:46:00.899880    4568 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50473-:22,hostfwd=tcp::50474-:2376,hostname=stopped-upgrade-480000 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/disk.qcow2
	I0729 16:46:00.944384    4568 main.go:141] libmachine: STDOUT: 
	I0729 16:46:00.944413    4568 main.go:141] libmachine: STDERR: 
	I0729 16:46:00.944419    4568 main.go:141] libmachine: Waiting for VM to start (ssh -p 50473 docker@127.0.0.1)...
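
The qemu-system-aarch64 invocation above is the hinge for everything that follows: the -nic user,...,hostfwd rules forward host ports 50473 and 50474 to the guest's SSH (22) and Docker TLS (2376) ports, which is why every later step dials localhost:50473. A minimal Go sketch of the same launch pattern (paths, ports, and the disk argument are illustrative, not minikube's actual code):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Launch a headless Apple-silicon guest with user-mode networking.
	// hostfwd maps host ports onto the guest's SSH (22) and Docker (2376).
	cmd := exec.Command("qemu-system-aarch64",
		"-M", "virt,highmem=off",
		"-cpu", "host",
		"-accel", "hvf", // Hypervisor.framework acceleration on macOS
		"-m", "2200", "-smp", "2",
		"-display", "none",
		"-nic", "user,model=virtio,hostfwd=tcp::50473-:22,hostfwd=tcp::50474-:2376",
		"-daemonize",
		"disk.qcow2", // hypothetical disk image path
	)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("qemu failed to start: %v\n%s", err, out)
	}
}

Because -daemonize backgrounds QEMU once the guest is running, the call returns almost immediately, and the caller then polls SSH on the forwarded port, which is exactly the "Waiting for VM to start" line above.
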
	I0729 16:46:21.045928    4568 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/config.json ...
	I0729 16:46:21.046697    4568 machine.go:94] provisionDockerMachine start ...
	I0729 16:46:21.046857    4568 main.go:141] libmachine: Using SSH client type: native
	I0729 16:46:21.047319    4568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10074aa10] 0x10074d270 <nil>  [] 0s} localhost 50473 <nil> <nil>}
	I0729 16:46:21.047333    4568 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 16:46:21.125876    4568 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 16:46:21.125913    4568 buildroot.go:166] provisioning hostname "stopped-upgrade-480000"
	I0729 16:46:21.126034    4568 main.go:141] libmachine: Using SSH client type: native
	I0729 16:46:21.126269    4568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10074aa10] 0x10074d270 <nil>  [] 0s} localhost 50473 <nil> <nil>}
	I0729 16:46:21.126282    4568 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-480000 && echo "stopped-upgrade-480000" | sudo tee /etc/hostname
	I0729 16:46:21.196331    4568 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-480000
	
	I0729 16:46:21.196442    4568 main.go:141] libmachine: Using SSH client type: native
	I0729 16:46:21.196652    4568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10074aa10] 0x10074d270 <nil>  [] 0s} localhost 50473 <nil> <nil>}
	I0729 16:46:21.196666    4568 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-480000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-480000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-480000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 16:46:21.255165    4568 main.go:141] libmachine: SSH cmd err, output: <nil>: 
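
The hostname script above is deliberately guarded: the grep checks make the 127.0.1.1 edit idempotent, so re-provisioning an existing VM leaves /etc/hosts untouched. A small Go sketch of rendering such a guarded snippet (hostsSnippet is hypothetical and simplified to the append case):

package main

import "fmt"

// hostsSnippet renders a guarded /etc/hosts edit for one hostname.
// Running the result twice leaves the file unchanged the second time.
func hostsSnippet(name string) string {
	return fmt.Sprintf("if ! grep -q '\\s%[1]s$' /etc/hosts; then\n"+
		"  echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts\n"+
		"fi\n", name)
}

func main() { fmt.Print(hostsSnippet("stopped-upgrade-480000")) }
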
	I0729 16:46:21.255178    4568 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19347-923/.minikube CaCertPath:/Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19347-923/.minikube}
	I0729 16:46:21.255187    4568 buildroot.go:174] setting up certificates
	I0729 16:46:21.255191    4568 provision.go:84] configureAuth start
	I0729 16:46:21.255199    4568 provision.go:143] copyHostCerts
	I0729 16:46:21.255270    4568 exec_runner.go:144] found /Users/jenkins/minikube-integration/19347-923/.minikube/key.pem, removing ...
	I0729 16:46:21.255276    4568 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19347-923/.minikube/key.pem
	I0729 16:46:21.255383    4568 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19347-923/.minikube/key.pem (1679 bytes)
	I0729 16:46:21.255559    4568 exec_runner.go:144] found /Users/jenkins/minikube-integration/19347-923/.minikube/ca.pem, removing ...
	I0729 16:46:21.255563    4568 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19347-923/.minikube/ca.pem
	I0729 16:46:21.255614    4568 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19347-923/.minikube/ca.pem (1082 bytes)
	I0729 16:46:21.255708    4568 exec_runner.go:144] found /Users/jenkins/minikube-integration/19347-923/.minikube/cert.pem, removing ...
	I0729 16:46:21.255711    4568 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19347-923/.minikube/cert.pem
	I0729 16:46:21.255759    4568 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19347-923/.minikube/cert.pem (1123 bytes)
	I0729 16:46:21.255844    4568 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19347-923/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-480000 san=[127.0.0.1 localhost minikube stopped-upgrade-480000]
	I0729 16:46:21.318570    4568 provision.go:177] copyRemoteCerts
	I0729 16:46:21.318606    4568 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 16:46:21.318613    4568 sshutil.go:53] new ssh client: &{IP:localhost Port:50473 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/id_rsa Username:docker}
	I0729 16:46:21.346264    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 16:46:21.352838    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 16:46:21.359350    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 16:46:21.366489    4568 provision.go:87] duration metric: took 111.29575ms to configureAuth
	I0729 16:46:21.366497    4568 buildroot.go:189] setting minikube options for container-runtime
	I0729 16:46:21.366598    4568 config.go:182] Loaded profile config "stopped-upgrade-480000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:46:21.366642    4568 main.go:141] libmachine: Using SSH client type: native
	I0729 16:46:21.366726    4568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10074aa10] 0x10074d270 <nil>  [] 0s} localhost 50473 <nil> <nil>}
	I0729 16:46:21.366730    4568 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 16:46:21.416423    4568 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 16:46:21.416433    4568 buildroot.go:70] root file system type: tmpfs
	I0729 16:46:21.416483    4568 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 16:46:21.416526    4568 main.go:141] libmachine: Using SSH client type: native
	I0729 16:46:21.416628    4568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10074aa10] 0x10074d270 <nil>  [] 0s} localhost 50473 <nil> <nil>}
	I0729 16:46:21.416662    4568 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 16:46:21.471806    4568 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 16:46:21.471850    4568 main.go:141] libmachine: Using SSH client type: native
	I0729 16:46:21.471954    4568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10074aa10] 0x10074d270 <nil>  [] 0s} localhost 50473 <nil> <nil>}
	I0729 16:46:21.471964    4568 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 16:46:21.821203    4568 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0729 16:46:21.821217    4568 machine.go:97] duration metric: took 774.520583ms to provisionDockerMachine
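
At this point dockerd is listening on tcp://0.0.0.0:2376 behind the TLS material copied during copyRemoteCerts, and the unit's empty ExecStart= line exists precisely to clear the base unit's command before substituting this one (the unit's own comments say as much). Reached from the host through the hostfwd rule as localhost:50474, the endpoint could be checked with a client sketch like this (the local cert paths are assumptions):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
	"os"
)

func main() {
	caPEM, err := os.ReadFile("certs/ca.pem") // hypothetical local copy of the CA
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	cert, err := tls.LoadX509KeyPair("certs/cert.pem", "certs/key.pem")
	if err != nil {
		log.Fatal(err)
	}
	conn, err := tls.Dial("tcp", "localhost:50474", &tls.Config{
		RootCAs:      pool,
		Certificates: []tls.Certificate{cert},
		ServerName:   "localhost", // server.pem's SANs include localhost
	})
	if err != nil {
		log.Fatalf("TLS handshake failed: %v", err)
	}
	defer conn.Close()
	fmt.Println("docker TLS endpoint reachable:", conn.RemoteAddr())
}
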
	I0729 16:46:21.821231    4568 start.go:293] postStartSetup for "stopped-upgrade-480000" (driver="qemu2")
	I0729 16:46:21.821238    4568 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 16:46:21.821308    4568 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 16:46:21.821319    4568 sshutil.go:53] new ssh client: &{IP:localhost Port:50473 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/id_rsa Username:docker}
	I0729 16:46:21.849137    4568 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 16:46:21.850597    4568 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 16:46:21.850604    4568 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19347-923/.minikube/addons for local assets ...
	I0729 16:46:21.850693    4568 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19347-923/.minikube/files for local assets ...
	I0729 16:46:21.850807    4568 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19347-923/.minikube/files/etc/ssl/certs/13902.pem -> 13902.pem in /etc/ssl/certs
	I0729 16:46:21.850933    4568 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 16:46:21.853842    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/files/etc/ssl/certs/13902.pem --> /etc/ssl/certs/13902.pem (1708 bytes)
	I0729 16:46:21.860475    4568 start.go:296] duration metric: took 39.23925ms for postStartSetup
	I0729 16:46:21.860491    4568 fix.go:56] duration metric: took 20.973163542s for fixHost
	I0729 16:46:21.860525    4568 main.go:141] libmachine: Using SSH client type: native
	I0729 16:46:21.860634    4568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10074aa10] 0x10074d270 <nil>  [] 0s} localhost 50473 <nil> <nil>}
	I0729 16:46:21.860638    4568 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 16:46:21.911705    4568 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722296782.197289837
	
	I0729 16:46:21.911712    4568 fix.go:216] guest clock: 1722296782.197289837
	I0729 16:46:21.911716    4568 fix.go:229] Guest: 2024-07-29 16:46:22.197289837 -0700 PDT Remote: 2024-07-29 16:46:21.860493 -0700 PDT m=+21.091105501 (delta=336.796837ms)
	I0729 16:46:21.911727    4568 fix.go:200] guest clock delta is within tolerance: 336.796837ms
	I0729 16:46:21.911729    4568 start.go:83] releasing machines lock for "stopped-upgrade-480000", held for 21.024412208s
	I0729 16:46:21.911784    4568 ssh_runner.go:195] Run: cat /version.json
	I0729 16:46:21.911795    4568 sshutil.go:53] new ssh client: &{IP:localhost Port:50473 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/id_rsa Username:docker}
	I0729 16:46:21.911784    4568 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 16:46:21.911836    4568 sshutil.go:53] new ssh client: &{IP:localhost Port:50473 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/id_rsa Username:docker}
	W0729 16:46:21.912363    4568 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50473: connect: connection refused
	I0729 16:46:21.912384    4568 retry.go:31] will retry after 159.743001ms: dial tcp [::1]:50473: connect: connection refused
	W0729 16:46:22.104824    4568 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 16:46:22.104914    4568 ssh_runner.go:195] Run: systemctl --version
	I0729 16:46:22.107118    4568 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 16:46:22.109140    4568 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 16:46:22.109176    4568 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 16:46:22.112413    4568 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0729 16:46:22.117495    4568 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
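
The two find/sed passes above normalize whatever CNI conflists ship in the guest image: any "subnet" (or podman "gateway") value is rewritten to minikube's pod CIDR 10.244.0.0/16, and IPv6 dst/subnet entries are dropped. The core rewrite, as a small Go illustration:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `{"type": "bridge", "ipam": {"subnet": "192.168.5.0/24"}}`
	// Replace whatever subnet the conflist shipped with by the pod CIDR,
	// mirroring the sed expression run on the guest above.
	re := regexp.MustCompile(`"subnet":\s*"[^"]*"`)
	fmt.Println(re.ReplaceAllString(conf, `"subnet": "10.244.0.0/16"`))
}
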
	I0729 16:46:22.117503    4568 start.go:495] detecting cgroup driver to use...
	I0729 16:46:22.117577    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 16:46:22.124380    4568 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 16:46:22.127464    4568 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 16:46:22.130432    4568 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 16:46:22.130456    4568 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 16:46:22.133672    4568 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 16:46:22.136381    4568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 16:46:22.139419    4568 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 16:46:22.142451    4568 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 16:46:22.145410    4568 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 16:46:22.148134    4568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 16:46:22.151332    4568 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 16:46:22.154685    4568 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 16:46:22.157441    4568 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 16:46:22.160077    4568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:46:22.237440    4568 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0729 16:46:22.243729    4568 start.go:495] detecting cgroup driver to use...
	I0729 16:46:22.243815    4568 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 16:46:22.249080    4568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 16:46:22.253861    4568 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 16:46:22.266372    4568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 16:46:22.270926    4568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 16:46:22.275479    4568 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0729 16:46:22.335767    4568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 16:46:22.341348    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 16:46:22.347117    4568 ssh_runner.go:195] Run: which cri-dockerd
	I0729 16:46:22.348465    4568 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 16:46:22.351350    4568 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 16:46:22.356064    4568 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 16:46:22.447598    4568 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 16:46:22.536018    4568 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 16:46:22.536087    4568 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 16:46:22.541531    4568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:46:22.619354    4568 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 16:46:23.744162    4568 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.124805041s)
	I0729 16:46:23.744228    4568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 16:46:23.748505    4568 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 16:46:23.754326    4568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 16:46:23.759119    4568 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 16:46:23.843181    4568 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 16:46:23.915397    4568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:46:23.998937    4568 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 16:46:24.004876    4568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 16:46:24.009454    4568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:46:24.086208    4568 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 16:46:24.124729    4568 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 16:46:24.124815    4568 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 16:46:24.127075    4568 start.go:563] Will wait 60s for crictl version
	I0729 16:46:24.127127    4568 ssh_runner.go:195] Run: which crictl
	I0729 16:46:24.128514    4568 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 16:46:24.142958    4568 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0729 16:46:24.143038    4568 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 16:46:24.159255    4568 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 16:46:24.180637    4568 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 16:46:24.180705    4568 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 16:46:24.181995    4568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 16:46:24.185558    4568 kubeadm.go:883] updating cluster {Name:stopped-upgrade-480000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50508 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-480000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 16:46:24.185605    4568 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 16:46:24.185643    4568 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 16:46:24.196013    4568 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 16:46:24.196021    4568 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 16:46:24.196067    4568 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 16:46:24.199501    4568 ssh_runner.go:195] Run: which lz4
	I0729 16:46:24.200810    4568 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 16:46:24.201962    4568 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 16:46:24.201974    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0729 16:46:25.120696    4568 docker.go:649] duration metric: took 919.926292ms to copy over tarball
	I0729 16:46:25.120770    4568 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 16:46:26.284532    4568 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.163763834s)
	I0729 16:46:26.284545    4568 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 16:46:26.299843    4568 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 16:46:26.303168    4568 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 16:46:26.307827    4568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:46:26.387394    4568 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 16:46:27.958374    4568 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.570984792s)
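
This block is the preload fast path: rather than pulling nine images over the network, the driver stats /preloaded.tar.lz4 on the guest, scp's the cached ~360 MB tarball when the stat fails, unpacks it straight into /var (populating /var/lib/docker's image store), deletes the tarball, rewrites repositories.json, and restarts Docker so the daemon re-reads the store. A compressed sketch of that sequence (runSSH is a stand-in, not minikube's real runner):

package main

import (
	"fmt"
	"log"
)

// runSSH is a hypothetical stand-in for executing a command on the guest
// over SSH (minikube's real runner lives in ssh_runner.go).
func runSSH(cmd string) error {
	fmt.Println("ssh>", cmd)
	return nil
}

func main() {
	// Existence probe: in the real flow a failed stat triggers the scp of
	// the preloaded-images tarball from the host cache before the steps below.
	if runSSH(`stat -c "%s %y" /preloaded.tar.lz4`) != nil {
		log.Println("tarball absent; would copy it from the host cache here")
	}
	for _, cmd := range []string{
		// unpack docker's image store directly under /var
		`sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4`,
		`sudo rm /preloaded.tar.lz4`,
		`sudo systemctl restart docker`, // re-read the unpacked store
	} {
		if err := runSSH(cmd); err != nil {
			log.Fatal(err)
		}
	}
}
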
	I0729 16:46:27.958485    4568 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 16:46:27.973307    4568 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 16:46:27.973316    4568 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 16:46:27.973321    4568 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 16:46:27.979149    4568 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:46:27.980799    4568 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:46:27.981997    4568 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:46:27.982109    4568 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:46:27.983503    4568 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:46:27.983704    4568 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:46:27.984773    4568 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:46:27.984903    4568 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:46:27.986189    4568 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:46:27.986212    4568 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:46:27.987872    4568 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:46:27.988155    4568 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 16:46:27.989293    4568 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 16:46:27.989723    4568 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:46:27.990736    4568 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 16:46:27.991358    4568 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 16:46:28.406798    4568 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:46:28.417247    4568 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 16:46:28.417281    4568 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:46:28.417349    4568 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:46:28.428035    4568 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 16:46:28.436288    4568 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	W0729 16:46:28.437849    4568 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 16:46:28.437929    4568 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:46:28.438015    4568 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:46:28.440931    4568 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:46:28.448746    4568 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 16:46:28.448769    4568 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:46:28.448841    4568 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:46:28.470447    4568 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 16:46:28.470468    4568 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 16:46:28.470472    4568 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:46:28.470479    4568 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:46:28.470527    4568 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:46:28.470542    4568 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 16:46:28.470552    4568 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:46:28.470527    4568 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:46:28.470544    4568 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 16:46:28.470581    4568 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:46:28.483503    4568 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 16:46:28.487780    4568 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0729 16:46:28.489525    4568 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 16:46:28.489591    4568 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0729 16:46:28.489641    4568 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 16:46:28.497384    4568 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 16:46:28.497422    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0729 16:46:28.497463    4568 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0729 16:46:28.497486    4568 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0729 16:46:28.497523    4568 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0729 16:46:28.513261    4568 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 16:46:28.513924    4568 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 16:46:28.514021    4568 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 16:46:28.542554    4568 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0729 16:46:28.542557    4568 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 16:46:28.542585    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0729 16:46:28.542593    4568 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 16:46:28.542641    4568 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 16:46:28.559495    4568 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 16:46:28.559518    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0729 16:46:28.566671    4568 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 16:46:28.602130    4568 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 16:46:28.602151    4568 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 16:46:28.602157    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0729 16:46:28.615204    4568 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 16:46:28.615321    4568 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:46:28.636059    4568 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0729 16:46:28.636102    4568 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 16:46:28.636120    4568 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:46:28.636177    4568 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:46:28.649715    4568 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 16:46:28.649840    4568 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 16:46:28.651160    4568 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0729 16:46:28.651171    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0729 16:46:28.679826    4568 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 16:46:28.679848    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0729 16:46:28.916624    4568 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 16:46:28.916661    4568 cache_images.go:92] duration metric: took 943.347959ms to LoadCachedImages
	W0729 16:46:28.916703    4568 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
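
This repeated X line is the notable diagnostic in the run: the preload tarball ships images under their legacy k8s.gcr.io names (see the docker images listings above), while this minikube expects registry.k8s.io names, so every control-plane image is marked "needs transfer". coredns, pause, and storage-provisioner are recovered from the host's per-image cache, but no cached kube-apiserver_v1.24.1 file exists, so the load gives up and the apiserver image would have to be obtained some other way, which is worth weighing against the test's overall exit status 80. A hedged workaround sketch (hypothetical, not what minikube does here) would be to alias the preloaded tags:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Re-tag the legacy k8s.gcr.io images to the registry.k8s.io names
	// the existence check wants; purely an illustration of the mismatch.
	for _, img := range []string{
		"kube-apiserver", "kube-controller-manager", "kube-scheduler", "kube-proxy",
	} {
		src := fmt.Sprintf("k8s.gcr.io/%s:v1.24.1", img)
		dst := fmt.Sprintf("registry.k8s.io/%s:v1.24.1", img)
		if out, err := exec.Command("docker", "tag", src, dst).CombinedOutput(); err != nil {
			log.Fatalf("docker tag %s %s: %v\n%s", src, dst, err, out)
		}
	}
}
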
	I0729 16:46:28.916709    4568 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 16:46:28.916759    4568 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-480000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-480000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
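
The rendered kubelet unit above reuses the ExecStart-clearing drop-in trick from the Docker unit: the empty ExecStart= wipes the base command, then the versioned kubelet binary is started against cri-dockerd's socket with the node name and IP pinned. Roughly this shape, as a text/template sketch (the template text and field names are illustrative, not minikube's source):

package main

import (
	"log"
	"os"
	"text/template"
)

// dropIn mirrors the shape of the kubelet unit shown in the log.
const dropIn = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the log lines above.
	if err := t.Execute(os.Stdout, map[string]string{
		"Version": "v1.24.1",
		"Node":    "stopped-upgrade-480000",
		"IP":      "10.0.2.15",
	}); err != nil {
		log.Fatal(err)
	}
}
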
	I0729 16:46:28.916820    4568 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 16:46:28.936604    4568 cni.go:84] Creating CNI manager for ""
	I0729 16:46:28.936620    4568 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:46:28.936624    4568 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 16:46:28.936633    4568 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-480000 NodeName:stopped-upgrade-480000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 16:46:28.936697    4568 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-480000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
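
The generated kubeadm config above is four YAML documents joined by ---: InitConfiguration (node identity and the cri-dockerd socket), ClusterConfiguration (cert SANs, control-plane endpoint, pod and service CIDRs), KubeletConfiguration, and KubeProxyConfiguration; it is scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A small sketch of walking such a multi-document stream (assumes gopkg.in/yaml.v3 and a hypothetical local copy of the file):

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the generated config
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		// e.g. "kubeadm.k8s.io/v1beta3 InitConfiguration"
		fmt.Println(doc.APIVersion, doc.Kind)
	}
}
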
	I0729 16:46:28.936756    4568 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 16:46:28.939669    4568 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 16:46:28.939701    4568 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 16:46:28.942711    4568 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 16:46:28.947869    4568 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 16:46:28.952848    4568 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0729 16:46:28.957934    4568 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 16:46:28.959096    4568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 16:46:28.962999    4568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:46:29.048794    4568 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 16:46:29.053878    4568 certs.go:68] Setting up /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000 for IP: 10.0.2.15
	I0729 16:46:29.053890    4568 certs.go:194] generating shared ca certs ...
	I0729 16:46:29.053899    4568 certs.go:226] acquiring lock for ca certs: {Name:mk4279a132dfe000316d0221b0d97d4e537df506 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:46:29.054074    4568 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19347-923/.minikube/ca.key
	I0729 16:46:29.054110    4568 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19347-923/.minikube/proxy-client-ca.key
	I0729 16:46:29.054117    4568 certs.go:256] generating profile certs ...
	I0729 16:46:29.054178    4568 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/client.key
	I0729 16:46:29.054196    4568 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/apiserver.key.35715295
	I0729 16:46:29.054205    4568 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/apiserver.crt.35715295 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 16:46:29.170842    4568 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/apiserver.crt.35715295 ...
	I0729 16:46:29.170853    4568 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/apiserver.crt.35715295: {Name:mke6eca6bee11c09e4ec4e59ab31263d0485cd20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:46:29.171107    4568 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/apiserver.key.35715295 ...
	I0729 16:46:29.171112    4568 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/apiserver.key.35715295: {Name:mk62bbe6b816963ecc85c7b294289074aed7a646 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:46:29.171239    4568 certs.go:381] copying /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/apiserver.crt.35715295 -> /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/apiserver.crt
	I0729 16:46:29.171359    4568 certs.go:385] copying /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/apiserver.key.35715295 -> /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/apiserver.key
	I0729 16:46:29.171478    4568 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/proxy-client.key
	I0729 16:46:29.171598    4568 certs.go:484] found cert: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/1390.pem (1338 bytes)
	W0729 16:46:29.171620    4568 certs.go:480] ignoring /Users/jenkins/minikube-integration/19347-923/.minikube/certs/1390_empty.pem, impossibly tiny 0 bytes
	I0729 16:46:29.171625    4568 certs.go:484] found cert: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 16:46:29.171643    4568 certs.go:484] found cert: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem (1082 bytes)
	I0729 16:46:29.171661    4568 certs.go:484] found cert: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem (1123 bytes)
	I0729 16:46:29.171680    4568 certs.go:484] found cert: /Users/jenkins/minikube-integration/19347-923/.minikube/certs/key.pem (1679 bytes)
	I0729 16:46:29.171718    4568 certs.go:484] found cert: /Users/jenkins/minikube-integration/19347-923/.minikube/files/etc/ssl/certs/13902.pem (1708 bytes)
	I0729 16:46:29.172025    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 16:46:29.178970    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 16:46:29.185733    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 16:46:29.192428    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 16:46:29.199051    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 16:46:29.206033    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 16:46:29.212621    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 16:46:29.219192    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 16:46:29.226456    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 16:46:29.233043    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/certs/1390.pem --> /usr/share/ca-certificates/1390.pem (1338 bytes)
	I0729 16:46:29.239453    4568 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19347-923/.minikube/files/etc/ssl/certs/13902.pem --> /usr/share/ca-certificates/13902.pem (1708 bytes)
	I0729 16:46:29.246556    4568 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 16:46:29.251721    4568 ssh_runner.go:195] Run: openssl version
	I0729 16:46:29.253449    4568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 16:46:29.256134    4568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:46:29.257616    4568 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 23:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:46:29.257645    4568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:46:29.259246    4568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 16:46:29.262505    4568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1390.pem && ln -fs /usr/share/ca-certificates/1390.pem /etc/ssl/certs/1390.pem"
	I0729 16:46:29.265613    4568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1390.pem
	I0729 16:46:29.267120    4568 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 23:11 /usr/share/ca-certificates/1390.pem
	I0729 16:46:29.267143    4568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1390.pem
	I0729 16:46:29.268990    4568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1390.pem /etc/ssl/certs/51391683.0"
	I0729 16:46:29.271766    4568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13902.pem && ln -fs /usr/share/ca-certificates/13902.pem /etc/ssl/certs/13902.pem"
	I0729 16:46:29.274990    4568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13902.pem
	I0729 16:46:29.276421    4568 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 23:11 /usr/share/ca-certificates/13902.pem
	I0729 16:46:29.276439    4568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13902.pem
	I0729 16:46:29.278175    4568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13902.pem /etc/ssl/certs/3ec20f2e.0"
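
The ls/openssl/ln sequence above reproduces OpenSSL's c_rehash layout: each CA certificate in /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0 (b5213941.0, 51391683.0, 3ec20f2e.0 in this run) so that lookup-by-hash can find it. A sketch that shells out to openssl the same way the log does (paths taken from the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        // Same command the log runs: print the subject hash OpenSSL names links by.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := "/etc/ssl/certs/" + hash + ".0"
        os.Remove(link) // best effort, mirrors ln -fs overwriting an old link
        if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link)
    }
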
	I0729 16:46:29.280898    4568 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 16:46:29.282168    4568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 16:46:29.284100    4568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 16:46:29.286057    4568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 16:46:29.287939    4568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 16:46:29.290040    4568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 16:46:29.291884    4568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
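
Each `openssl x509 -checkend 86400` above asks whether the certificate will still be valid 86400 seconds from now; a non-zero exit would force regeneration. The same check in plain Go with crypto/x509 (file path taken from the first check in the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Equivalent of -checkend 86400: compare NotAfter against now+24h.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate will expire within 86400s")
            os.Exit(1)
        }
        fmt.Println("certificate is valid for at least another day")
    }
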
	I0729 16:46:29.293983    4568 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-480000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50508 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-480000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 16:46:29.294051    4568 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 16:46:29.304453    4568 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 16:46:29.307543    4568 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 16:46:29.307549    4568 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 16:46:29.307575    4568 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 16:46:29.310381    4568 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 16:46:29.310655    4568 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-480000" does not appear in /Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:46:29.310752    4568 kubeconfig.go:62] /Users/jenkins/minikube-integration/19347-923/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-480000" cluster setting kubeconfig missing "stopped-upgrade-480000" context setting]
	I0729 16:46:29.310929    4568 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/kubeconfig: {Name:mkd561657b833051fbf9227370398307b87f9720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:46:29.311352    4568 kapi.go:59] client config for stopped-upgrade-480000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/client.key", CAFile:"/Users/jenkins/minikube-integration/19347-923/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101ae0080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
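
The kubeconfig repair above finds both the cluster and the context entries for stopped-upgrade-480000 missing and rewrites the file under a lock before building the client config. A minimal sketch of that repair using client-go's clientcmd package (assumed importable; server address and names taken from the log):

    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
        path := "/Users/jenkins/minikube-integration/19347-923/kubeconfig"
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            panic(err)
        }
        name := "stopped-upgrade-480000"
        // Add the missing cluster setting.
        if _, ok := cfg.Clusters[name]; !ok {
            cfg.Clusters[name] = &api.Cluster{Server: "https://10.0.2.15:8443"}
        }
        // Add the missing context setting (AuthInfo entry omitted in this sketch).
        if _, ok := cfg.Contexts[name]; !ok {
            cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
        }
        if err := clientcmd.WriteToFile(*cfg, path); err != nil {
            panic(err)
        }
    }
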
	I0729 16:46:29.311665    4568 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 16:46:29.314298    4568 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-480000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
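
Drift detection here is nothing more than `sudo diff -u` over the old and new kubeadm.yaml: exit status 0 means no drift, 1 means the files differ (as above, where the CRI socket gains its unix:// scheme and the cgroup driver flips from systemd to cgroupfs), and 2 means diff itself failed. A sketch of that three-way handling in Go:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        out, err := cmd.Output() // stdout still holds the diff when diff exits 1
        if err == nil {
            fmt.Println("no config drift")
            return
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            fmt.Printf("drift detected, reconfiguring from the new file:\n%s", out)
            return
        }
        panic(err) // exit status 2: diff could not compare the files
    }
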
	I0729 16:46:29.314303    4568 kubeadm.go:1160] stopping kube-system containers ...
	I0729 16:46:29.314340    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 16:46:29.324215    4568 docker.go:483] Stopping containers: [ea007e6b4743 4866a9c899c6 6b64e4a0a495 df1f20080bd7 405fef0e15b0 bcd664408a20 2aa835c9fd1e a7d1fe2e3558]
	I0729 16:46:29.324282    4568 ssh_runner.go:195] Run: docker stop ea007e6b4743 4866a9c899c6 6b64e4a0a495 df1f20080bd7 405fef0e15b0 bcd664408a20 2aa835c9fd1e a7d1fe2e3558
	I0729 16:46:29.334668    4568 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 16:46:29.340127    4568 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 16:46:29.343205    4568 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 16:46:29.343214    4568 kubeadm.go:157] found existing configuration files:
	
	I0729 16:46:29.343233    4568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/admin.conf
	I0729 16:46:29.346224    4568 kubeadm.go:163] "https://control-plane.minikube.internal:50508" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 16:46:29.346248    4568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 16:46:29.348849    4568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/kubelet.conf
	I0729 16:46:29.351306    4568 kubeadm.go:163] "https://control-plane.minikube.internal:50508" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 16:46:29.351331    4568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 16:46:29.354309    4568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/controller-manager.conf
	I0729 16:46:29.356958    4568 kubeadm.go:163] "https://control-plane.minikube.internal:50508" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 16:46:29.356980    4568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 16:46:29.359450    4568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/scheduler.conf
	I0729 16:46:29.362549    4568 kubeadm.go:163] "https://control-plane.minikube.internal:50508" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 16:46:29.362575    4568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 16:46:29.365438    4568 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 16:46:29.368193    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:46:29.389995    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:46:29.803236    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:46:29.938108    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:46:29.960153    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
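
Because existing configuration was found, the restart path replays individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full `kubeadm init`, each against the freshly copied kubeadm.yaml and with PATH pinned to the v1.24.1 binaries. A sketch of that sequence (binary and config paths taken from the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.24.1/kubeadm"
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, phase := range phases {
            args := append([]string{"init", "phase"}, phase...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command(kubeadm, args...)
            // Pin PATH the same way the logged bash invocations do.
            cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.24.1:"+os.Getenv("PATH"))
            if out, err := cmd.CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s", phase, err, out)
                os.Exit(1)
            }
        }
    }
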
	I0729 16:46:29.984287    4568 api_server.go:52] waiting for apiserver process to appear ...
	I0729 16:46:29.984371    4568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:46:30.485812    4568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:46:30.986429    4568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:46:30.990956    4568 api_server.go:72] duration metric: took 1.006691625s to wait for apiserver process to appear ...
	I0729 16:46:30.990966    4568 api_server.go:88] waiting for apiserver healthz status ...
	I0729 16:46:30.990976    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:35.992988    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:46:35.993016    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:40.993179    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:46:40.993219    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:45.993588    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:46:45.993624    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:50.994093    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:46:50.994150    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:46:55.994849    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:46:55.994891    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:00.995651    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:00.995692    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:05.996622    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:05.996660    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:10.997857    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:10.997890    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:15.999476    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:15.999520    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:21.001522    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:21.001545    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:26.003672    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:26.003713    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:31.005909    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
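
From 16:46:30 onward every healthz probe above times out after roughly five seconds with "Client.Timeout exceeded while awaiting headers"; the apiserver never comes up, which is the proximate failure of this upgrade test. Reduced to its essentials, the wait loop looks like this (endpoint taken from the log; InsecureSkipVerify is a sketch-only shortcut, minikube verifies against its own CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s gap between logged attempts
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if string(body) == "ok" {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver never became healthy")
    }
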
	I0729 16:47:31.006039    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:47:31.021018    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:47:31.021104    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:47:31.032798    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:47:31.032873    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:47:31.045913    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:47:31.045984    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:47:31.056560    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:47:31.056648    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:47:31.067379    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:47:31.067454    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:47:31.078613    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:47:31.078681    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:47:31.089285    4568 logs.go:276] 0 containers: []
	W0729 16:47:31.089297    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:47:31.089359    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:47:31.099563    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:47:31.099585    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:47:31.099590    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:47:31.112987    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:47:31.112998    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:47:31.125018    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:47:31.125030    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:47:31.129568    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:47:31.129580    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:47:31.233261    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:47:31.233275    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:47:31.249195    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:47:31.249208    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:47:31.264804    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:47:31.264816    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:47:31.276916    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:47:31.276928    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:47:31.288670    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:47:31.288681    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:47:31.300350    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:47:31.300359    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:47:31.311549    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:47:31.311559    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:47:31.350673    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:47:31.350686    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:47:31.365457    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:47:31.365471    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:47:31.383998    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:47:31.384009    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:47:31.431042    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:47:31.431053    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:47:31.444803    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:47:31.444813    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
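
Between failed healthz attempts the harness runs the same diagnostics pass each time: one `docker ps -a --filter=name=k8s_<component>` per control-plane component, then `docker logs --tail 400` for every container it finds (hence the repeated warning when no kindnet container matches). A compact sketch of that pass:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("%s: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
            }
        }
    }
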
	I0729 16:47:33.971281    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:38.973524    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:38.973713    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:47:38.991404    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:47:38.991490    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:47:39.005694    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:47:39.005767    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:47:39.015937    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:47:39.016009    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:47:39.026083    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:47:39.026153    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:47:39.036983    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:47:39.037058    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:47:39.047915    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:47:39.047984    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:47:39.058373    4568 logs.go:276] 0 containers: []
	W0729 16:47:39.058384    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:47:39.058439    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:47:39.069115    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:47:39.069132    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:47:39.069138    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:47:39.081058    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:47:39.081069    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:47:39.093466    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:47:39.093476    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:47:39.118447    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:47:39.118456    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:47:39.122478    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:47:39.122484    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:47:39.139008    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:47:39.139018    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:47:39.154485    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:47:39.154501    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:47:39.169701    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:47:39.169712    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:47:39.208022    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:47:39.208033    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:47:39.221982    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:47:39.221993    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:47:39.237214    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:47:39.237225    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:47:39.249128    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:47:39.249139    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:47:39.287718    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:47:39.287732    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:47:39.299448    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:47:39.299459    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:47:39.336295    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:47:39.336304    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:47:39.348085    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:47:39.348095    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:47:41.867113    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:46.869419    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:46.869524    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:47:46.880771    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:47:46.880846    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:47:46.891117    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:47:46.891209    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:47:46.901802    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:47:46.901864    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:47:46.912259    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:47:46.912337    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:47:46.922317    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:47:46.922384    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:47:46.932584    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:47:46.932645    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:47:46.942390    4568 logs.go:276] 0 containers: []
	W0729 16:47:46.942401    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:47:46.942452    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:47:46.952933    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:47:46.952950    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:47:46.952957    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:47:46.964284    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:47:46.964297    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:47:46.979465    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:47:46.979477    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:47:47.016390    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:47:47.016404    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:47:47.054079    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:47:47.054094    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:47:47.069072    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:47:47.069083    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:47:47.080831    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:47:47.080844    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:47:47.093624    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:47:47.093637    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:47:47.105268    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:47:47.105280    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:47:47.130932    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:47:47.130944    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:47:47.145127    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:47:47.145137    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:47:47.167738    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:47:47.167749    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:47:47.186962    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:47:47.186973    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:47:47.204387    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:47:47.204398    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:47:47.216635    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:47:47.216647    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:47:47.255238    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:47:47.255249    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:47:49.761405    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:47:54.763606    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:47:54.763800    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:47:54.790681    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:47:54.790766    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:47:54.803855    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:47:54.803933    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:47:54.814603    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:47:54.814674    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:47:54.824834    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:47:54.824909    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:47:54.836218    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:47:54.836290    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:47:54.846550    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:47:54.846620    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:47:54.856199    4568 logs.go:276] 0 containers: []
	W0729 16:47:54.856209    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:47:54.856281    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:47:54.874845    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:47:54.874860    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:47:54.874867    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:47:54.886551    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:47:54.886565    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:47:54.901595    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:47:54.901605    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:47:54.913078    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:47:54.913092    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:47:54.938810    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:47:54.938821    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:47:54.978318    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:47:54.978329    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:47:55.014914    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:47:55.014927    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:47:55.028522    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:47:55.028536    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:47:55.047472    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:47:55.047488    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:47:55.059008    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:47:55.059017    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:47:55.070612    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:47:55.070623    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:47:55.105397    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:47:55.105409    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:47:55.120098    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:47:55.120108    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:47:55.141016    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:47:55.141028    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:47:55.159252    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:47:55.159262    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:47:55.171999    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:47:55.172012    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:47:57.677910    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:02.680226    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:02.680409    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:02.706596    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:48:02.706703    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:02.722054    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:48:02.722134    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:02.734554    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:48:02.734616    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:02.746078    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:48:02.746153    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:02.756834    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:48:02.756901    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:02.767837    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:48:02.767907    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:02.777904    4568 logs.go:276] 0 containers: []
	W0729 16:48:02.777916    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:02.777969    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:02.788648    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:48:02.788666    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:48:02.788671    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:48:02.807589    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:48:02.807603    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:48:02.825333    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:48:02.825348    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:48:02.838045    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:02.838056    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:02.877613    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:48:02.877621    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:48:02.891550    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:48:02.891561    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:48:02.929040    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:48:02.929052    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:48:02.949399    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:02.949411    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:02.954063    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:48:02.954070    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:48:02.968358    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:48:02.968368    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:48:02.979837    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:48:02.979848    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:48:02.991440    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:02.991450    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:03.026805    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:48:03.026816    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:48:03.042208    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:48:03.042222    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:03.053705    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:48:03.053718    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:48:03.068365    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:03.068379    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:05.594758    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:10.596927    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:10.597025    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:10.608240    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:48:10.608320    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:10.619494    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:48:10.619587    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:10.631475    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:48:10.631545    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:10.642518    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:48:10.642586    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:10.652986    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:48:10.653053    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:10.663374    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:48:10.663446    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:10.673489    4568 logs.go:276] 0 containers: []
	W0729 16:48:10.673501    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:10.673554    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:10.688706    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:48:10.688723    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:48:10.688729    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:10.702902    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:10.702920    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:10.737727    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:48:10.737737    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:48:10.776559    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:48:10.776572    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:48:10.788912    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:48:10.788927    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:48:10.808563    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:48:10.808574    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:48:10.820914    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:10.820925    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:10.845169    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:10.845179    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:10.886536    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:48:10.886552    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:48:10.898217    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:48:10.898231    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:48:10.916791    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:10.916805    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:10.921025    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:48:10.921031    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:48:10.934607    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:48:10.934620    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:48:10.946900    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:48:10.946914    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:48:10.960134    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:48:10.960144    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:48:10.974467    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:48:10.974477    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:48:13.487722    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:18.489218    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:18.489469    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:18.515748    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:48:18.515845    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:18.530971    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:48:18.531061    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:18.543504    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:48:18.543573    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:18.554327    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:48:18.554409    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:18.564992    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:48:18.565064    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:18.575744    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:48:18.575810    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:18.585476    4568 logs.go:276] 0 containers: []
	W0729 16:48:18.585487    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:18.585547    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:18.598406    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:48:18.598431    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:48:18.598437    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:18.611252    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:18.611267    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:18.650275    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:18.650287    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:18.654457    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:48:18.654466    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:48:18.668262    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:48:18.668275    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:48:18.679263    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:48:18.679275    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:48:18.691146    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:18.691161    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:18.718451    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:18.718464    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:18.754674    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:48:18.754689    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:48:18.792360    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:48:18.792370    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:48:18.806624    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:48:18.806632    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:48:18.824118    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:48:18.824133    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:48:18.842328    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:48:18.842343    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:48:18.856623    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:48:18.856638    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:48:18.872246    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:48:18.872256    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:48:18.885146    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:48:18.885156    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
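	[editor's note] The cycle above (a healthz probe that times out after ~5s, followed by a full diagnostics sweep) repeats for the rest of this log. A minimal Go sketch of that poll loop, with illustrative endpoint, timeout, and interval values assumed from the timestamps rather than taken from minikube's actual api_server.go:

	// Sketch only: poll /healthz with a short client timeout and retry
	// until an overall deadline expires, as the log records. All names
	// and durations here are assumptions for illustration.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s gap before each "stopped:" line
			Transport: &http.Transport{
				// The guest VM serves a self-signed cert, so skip verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported healthy
				}
			}
			// On timeout or non-200, gather diagnostics and retry after a pause.
			time.Sleep(2500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}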
	I0729 16:48:21.399965    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:26.402342    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:26.402519    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:26.416036    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:48:26.416122    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:26.427516    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:48:26.427584    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:26.438166    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:48:26.438240    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:26.448707    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:48:26.448778    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:26.459143    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:48:26.459211    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:26.469643    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:48:26.469704    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:26.479768    4568 logs.go:276] 0 containers: []
	W0729 16:48:26.479782    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:26.479842    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:26.490319    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:48:26.490339    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:48:26.490345    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:48:26.531731    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:48:26.531741    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:48:26.545584    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:48:26.545595    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:48:26.557059    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:26.557070    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:26.561272    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:26.561279    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:26.598045    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:48:26.598060    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:48:26.612865    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:48:26.612876    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:48:26.627784    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:48:26.627800    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:48:26.646558    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:26.646574    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:26.670202    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:48:26.670211    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:48:26.684068    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:48:26.684078    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:48:26.695759    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:48:26.695772    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:26.709911    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:26.709922    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:26.747132    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:48:26.747142    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:48:26.759369    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:48:26.759379    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:48:26.771063    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:48:26.771074    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:48:29.288333    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:34.290472    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:34.290619    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:34.307834    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:48:34.307916    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:34.318375    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:48:34.318446    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:34.328562    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:48:34.328635    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:34.339395    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:48:34.339470    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:34.350145    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:48:34.350220    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:34.360916    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:48:34.360986    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:34.371213    4568 logs.go:276] 0 containers: []
	W0729 16:48:34.371227    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:34.371288    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:34.383344    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:48:34.383361    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:48:34.383367    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:48:34.395192    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:48:34.395206    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:48:34.406193    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:48:34.406205    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:48:34.417200    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:34.417214    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:34.421405    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:34.421412    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:34.444211    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:34.444219    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:34.481150    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:48:34.481157    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:48:34.493647    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:48:34.493661    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:48:34.505271    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:48:34.505281    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:48:34.531028    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:48:34.531038    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:34.543272    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:48:34.543286    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:48:34.582128    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:48:34.582138    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:48:34.596027    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:48:34.596045    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:48:34.614495    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:48:34.614506    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:48:34.629080    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:48:34.629095    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:48:34.648083    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:34.648097    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:37.191332    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:42.193624    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:42.193856    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:42.210461    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:48:42.210546    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:42.222690    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:48:42.222765    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:42.233075    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:48:42.233140    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:42.243412    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:48:42.243488    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:42.254498    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:48:42.254571    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:42.265464    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:48:42.265526    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:42.275846    4568 logs.go:276] 0 containers: []
	W0729 16:48:42.275856    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:42.275909    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:42.286248    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:48:42.286268    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:48:42.286273    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:48:42.328450    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:48:42.328461    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:48:42.342876    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:48:42.342890    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:48:42.354202    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:48:42.354215    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:48:42.365709    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:48:42.365720    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:48:42.379917    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:48:42.379927    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:48:42.396825    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:48:42.396835    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:48:42.408764    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:42.408776    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:42.447818    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:42.447829    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:42.483094    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:48:42.483107    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:48:42.498393    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:48:42.498406    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:48:42.509656    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:48:42.509669    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:48:42.524355    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:42.524365    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:42.528750    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:48:42.528758    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:48:42.540447    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:42.540458    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:42.564972    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:48:42.564979    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
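	[editor's note] Each failed probe triggers the enumeration-and-gathering sweep logged above: one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` per component, then `docker logs --tail 400 <id>` for every container found. A minimal local Go sketch of that step (assumed names; the real flow runs these commands over SSH inside the guest):

	// Sketch only: list containers for each Kubernetes component by name
	// filter, then tail each container's logs, mirroring the log lines above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors the `docker ps -a --filter=name=... --format={{.ID}}`
	// invocations recorded in this report.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
			ids, err := containerIDs(component)
			if err != nil || len(ids) == 0 {
				// Corresponds to the warning: No container was found matching "kindnet"
				fmt.Printf("No container was found matching %q\n", component)
				continue
			}
			for _, id := range ids {
				// Matches the `docker logs --tail 400 <id>` gathering step.
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s", component, id, logs)
			}
		}
	}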
	I0729 16:48:45.078218    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:50.080517    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:50.080742    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:50.103688    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:48:50.103782    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:50.119573    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:48:50.119651    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:50.130818    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:48:50.130892    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:50.141621    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:48:50.141699    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:50.152443    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:48:50.152511    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:50.163124    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:48:50.163195    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:50.181637    4568 logs.go:276] 0 containers: []
	W0729 16:48:50.181649    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:50.181706    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:50.192102    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:48:50.192120    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:48:50.192125    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:48:50.203575    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:48:50.203587    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:48:50.219784    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:48:50.219795    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:48:50.237968    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:50.237979    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:50.263406    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:50.263415    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:50.303480    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:50.303496    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:50.307976    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:48:50.307984    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:48:50.349048    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:48:50.349059    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:48:50.363600    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:48:50.363612    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:48:50.374838    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:50.374851    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:50.412361    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:48:50.412377    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:48:50.426821    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:48:50.426834    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:48:50.442518    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:48:50.442530    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:50.456859    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:48:50.456875    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:48:50.470609    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:48:50.470620    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:48:50.483986    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:48:50.483997    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:48:53.000284    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:48:58.002531    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:48:58.002680    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:48:58.021314    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:48:58.021391    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:48:58.032569    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:48:58.032637    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:48:58.042938    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:48:58.043000    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:48:58.053777    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:48:58.053855    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:48:58.064976    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:48:58.065044    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:48:58.080819    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:48:58.080899    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:48:58.092215    4568 logs.go:276] 0 containers: []
	W0729 16:48:58.092228    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:48:58.092294    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:48:58.103059    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:48:58.103079    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:48:58.103084    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:48:58.114566    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:48:58.114578    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:48:58.139125    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:48:58.139132    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:48:58.150914    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:48:58.150927    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:48:58.166742    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:48:58.166752    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:48:58.204423    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:48:58.204432    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:48:58.208442    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:48:58.208448    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:48:58.243555    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:48:58.243568    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:48:58.257462    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:48:58.257473    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:48:58.272135    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:48:58.272149    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:48:58.283068    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:48:58.283080    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:48:58.298143    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:48:58.298155    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:48:58.337032    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:48:58.337046    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:48:58.350984    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:48:58.350997    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:48:58.362565    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:48:58.362578    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:48:58.380258    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:48:58.380272    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:49:00.894747    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:05.895159    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:05.895303    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:05.909202    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:49:05.909270    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:05.919318    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:49:05.919394    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:05.930127    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:49:05.930197    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:05.945988    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:49:05.946068    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:05.956631    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:49:05.956701    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:05.967352    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:49:05.967431    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:05.977828    4568 logs.go:276] 0 containers: []
	W0729 16:49:05.977841    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:05.977897    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:05.988827    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:49:05.988843    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:49:05.988848    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:49:06.001505    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:49:06.001517    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:49:06.019190    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:49:06.019201    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:49:06.031266    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:06.031276    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:06.036012    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:06.036020    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:06.069450    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:49:06.069460    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:49:06.092306    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:49:06.092315    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:49:06.106246    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:49:06.106257    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:49:06.117842    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:49:06.117854    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:49:06.130380    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:06.130391    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:06.159129    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:06.159141    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:06.197388    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:49:06.197399    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:49:06.236325    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:49:06.236337    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:06.249251    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:49:06.249261    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:49:06.263726    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:49:06.263739    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:49:06.285027    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:49:06.285038    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:49:08.798550    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:13.800839    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:13.801060    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:13.814729    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:49:13.814805    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:13.827819    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:49:13.827891    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:13.838069    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:49:13.838143    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:13.848843    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:49:13.848919    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:13.858795    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:49:13.858863    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:13.869409    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:49:13.869483    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:13.879338    4568 logs.go:276] 0 containers: []
	W0729 16:49:13.879352    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:13.879410    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:13.889785    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:49:13.889806    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:49:13.889812    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:49:13.927097    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:49:13.927107    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:49:13.938712    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:49:13.938724    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:49:13.953511    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:49:13.953522    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:49:13.964673    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:13.964684    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:14.000893    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:14.000903    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:14.004649    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:49:14.004658    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:49:14.018103    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:49:14.018113    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:14.029971    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:14.029982    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:14.065953    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:49:14.065968    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:49:14.080332    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:49:14.080350    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:49:14.096223    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:49:14.096234    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:49:14.113971    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:49:14.114011    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:49:14.136834    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:49:14.136845    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:49:14.150495    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:49:14.150510    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:49:14.164701    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:14.164714    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:16.689206    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:21.691526    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:21.691872    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:21.718480    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:49:21.718627    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:21.735942    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:49:21.736022    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:21.755273    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:49:21.755349    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:21.766031    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:49:21.766105    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:21.776786    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:49:21.776859    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:21.788230    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:49:21.788304    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:21.798999    4568 logs.go:276] 0 containers: []
	W0729 16:49:21.799015    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:21.799073    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:21.813134    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:49:21.813152    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:49:21.813157    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:49:21.827124    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:49:21.827135    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:49:21.844507    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:21.844517    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:21.868469    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:49:21.868491    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:49:21.881779    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:49:21.881791    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:49:21.897287    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:21.897302    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:21.931661    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:49:21.931671    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:49:21.969300    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:49:21.969311    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:49:21.984020    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:49:21.984034    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:49:21.995653    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:49:21.995663    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:49:22.007385    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:49:22.007395    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:22.019519    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:22.019529    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:22.056280    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:22.056292    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:22.060118    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:49:22.060127    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:49:22.071658    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:49:22.071668    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:49:22.085434    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:49:22.085445    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:49:24.602104    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:29.604355    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:29.604477    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:29.624494    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:49:29.624568    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:29.643806    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:49:29.643880    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:29.654230    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:49:29.654304    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:29.665069    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:49:29.665145    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:29.675593    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:49:29.675660    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:29.686512    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:49:29.686579    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:29.700357    4568 logs.go:276] 0 containers: []
	W0729 16:49:29.700374    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:29.700439    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:29.711373    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:49:29.711390    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:29.711396    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:29.745289    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:49:29.745303    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:49:29.760069    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:49:29.760084    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:49:29.777342    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:29.777353    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:29.801137    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:29.801144    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:29.840234    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:49:29.840247    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:49:29.851864    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:49:29.851879    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:49:29.863503    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:49:29.863514    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:29.875395    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:29.875406    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:29.879584    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:49:29.879593    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:49:29.917984    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:49:29.917994    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:49:29.932799    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:49:29.932812    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:49:29.945307    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:49:29.945319    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:49:29.959291    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:49:29.959304    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:49:29.973936    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:49:29.973948    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:49:29.985790    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:49:29.985802    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:49:32.499062    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:37.501267    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:37.501450    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:37.523612    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:49:37.523693    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:37.535621    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:49:37.535684    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:37.546623    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:49:37.546705    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:37.557273    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:49:37.557345    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:37.567509    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:49:37.567569    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:37.579438    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:49:37.579504    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:37.589675    4568 logs.go:276] 0 containers: []
	W0729 16:49:37.589693    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:37.589749    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:37.600139    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:49:37.600159    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:37.600165    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:37.604486    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:49:37.604495    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:49:37.616416    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:49:37.616427    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:49:37.627878    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:49:37.627891    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:49:37.645499    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:49:37.645509    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:49:37.657716    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:37.657729    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:37.681940    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:49:37.681948    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:49:37.706627    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:49:37.706638    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:49:37.721676    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:49:37.721688    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:37.733485    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:37.733497    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:37.773348    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:49:37.773357    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:49:37.787225    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:49:37.787237    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:49:37.825311    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:49:37.825324    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:49:37.839888    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:49:37.839901    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:49:37.851156    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:37.851169    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:37.886575    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:49:37.886587    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:49:40.400990    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:45.403306    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:45.403446    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:45.423667    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:49:45.423750    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:45.434627    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:49:45.434697    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:45.445549    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:49:45.445620    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:45.456005    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:49:45.456070    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:45.466449    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:49:45.466521    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:45.477178    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:49:45.477245    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:45.487502    4568 logs.go:276] 0 containers: []
	W0729 16:49:45.487514    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:45.487572    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:45.498208    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:49:45.498225    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:49:45.498231    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:49:45.512023    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:49:45.512033    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:49:45.552172    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:49:45.552185    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:49:45.563813    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:49:45.563828    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:49:45.578324    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:49:45.578336    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:49:45.591072    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:49:45.591083    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:49:45.602119    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:49:45.602132    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:49:45.617271    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:49:45.617284    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:49:45.629216    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:49:45.629228    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:49:45.647122    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:49:45.647133    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:49:45.658382    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:45.658393    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:45.698074    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:45.698085    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:45.702508    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:45.702515    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:45.735984    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:49:45.735995    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:49:45.750456    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:45.750471    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:45.773655    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:49:45.773670    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:48.287667    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:49:53.289361    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:49:53.289548    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:49:53.305635    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:49:53.305714    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:49:53.317797    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:49:53.317860    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:49:53.328877    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:49:53.328948    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:49:53.339387    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:49:53.339464    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:49:53.350420    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:49:53.350492    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:49:53.365153    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:49:53.365232    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:49:53.384195    4568 logs.go:276] 0 containers: []
	W0729 16:49:53.384210    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:49:53.384274    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:49:53.409779    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:49:53.409799    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:49:53.409805    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:49:53.444664    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:49:53.444679    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:49:53.463611    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:49:53.463623    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:49:53.477854    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:49:53.477864    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:49:53.489803    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:49:53.489815    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:49:53.501588    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:49:53.501598    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:49:53.525318    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:49:53.525325    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:49:53.563182    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:49:53.563195    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:49:53.577798    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:49:53.577808    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:49:53.595108    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:49:53.595120    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:49:53.607428    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:49:53.607442    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:49:53.619792    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:49:53.619806    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:49:53.624116    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:49:53.624123    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:49:53.663064    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:49:53.663071    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:49:53.677468    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:49:53.677480    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:49:53.689548    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:49:53.689562    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:49:56.205062    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:01.207390    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:01.207551    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:50:01.222782    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:50:01.222865    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:50:01.234631    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:50:01.234700    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:50:01.245503    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:50:01.245574    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:50:01.255885    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:50:01.255961    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:50:01.266645    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:50:01.266717    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:50:01.277140    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:50:01.277203    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:50:01.287897    4568 logs.go:276] 0 containers: []
	W0729 16:50:01.287908    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:50:01.287961    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:50:01.298374    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:50:01.298396    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:50:01.298402    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:50:01.337041    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:50:01.337054    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:50:01.348412    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:50:01.348425    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:50:01.360704    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:50:01.360717    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:50:01.375948    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:50:01.375960    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:50:01.398932    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:50:01.398955    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:50:01.412697    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:50:01.412706    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:50:01.427266    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:50:01.427276    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:50:01.464471    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:50:01.464483    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:50:01.468838    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:50:01.468847    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:50:01.482552    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:50:01.482563    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:50:01.494762    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:50:01.494775    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:50:01.529079    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:50:01.529094    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:50:01.541313    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:50:01.541324    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:50:01.556117    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:50:01.556129    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:50:01.573075    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:50:01.573086    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:50:04.087850    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:09.090240    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:09.090401    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:50:09.104204    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:50:09.104290    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:50:09.116859    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:50:09.116931    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:50:09.127702    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:50:09.127799    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:50:09.139198    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:50:09.139268    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:50:09.154501    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:50:09.154573    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:50:09.166160    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:50:09.166230    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:50:09.178602    4568 logs.go:276] 0 containers: []
	W0729 16:50:09.178612    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:50:09.178667    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:50:09.189460    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:50:09.189477    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:50:09.189483    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:50:09.211718    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:50:09.211734    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:50:09.250916    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:50:09.250931    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:50:09.267344    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:50:09.267354    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:50:09.278719    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:50:09.278731    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:50:09.293296    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:50:09.293306    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:50:09.333039    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:50:09.333054    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:50:09.337655    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:50:09.337662    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:50:09.373008    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:50:09.373022    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:50:09.384817    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:50:09.384829    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:50:09.397119    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:50:09.397130    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:50:09.408811    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:50:09.408822    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:50:09.420715    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:50:09.420728    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:50:09.435079    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:50:09.435089    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:50:09.449110    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:50:09.449121    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:50:09.460582    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:50:09.460592    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:50:11.980479    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:16.982795    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:16.982931    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:50:17.000535    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:50:17.000616    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:50:17.014541    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:50:17.014614    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:50:17.025665    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:50:17.025726    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:50:17.036022    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:50:17.036091    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:50:17.046172    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:50:17.046231    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:50:17.057085    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:50:17.057152    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:50:17.069404    4568 logs.go:276] 0 containers: []
	W0729 16:50:17.069417    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:50:17.069472    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:50:17.080076    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:50:17.080101    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:50:17.080107    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:50:17.099141    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:50:17.099153    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:50:17.112909    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:50:17.112919    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:50:17.130933    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:50:17.130946    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:50:17.143230    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:50:17.143246    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:50:17.147202    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:50:17.147208    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:50:17.158244    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:50:17.158255    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:50:17.172543    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:50:17.172556    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:50:17.184937    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:50:17.184947    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:50:17.223864    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:50:17.223884    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:50:17.235881    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:50:17.235894    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:50:17.250842    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:50:17.250852    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:50:17.262422    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:50:17.262433    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:50:17.284201    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:50:17.284208    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:50:17.322232    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:50:17.322243    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:50:17.339111    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:50:17.339126    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:50:19.877432    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:24.879616    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:24.879747    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:50:24.891290    4568 logs.go:276] 2 containers: [43cc4f032dd9 4866a9c899c6]
	I0729 16:50:24.891365    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:50:24.902535    4568 logs.go:276] 2 containers: [08e03c58f130 6b64e4a0a495]
	I0729 16:50:24.902612    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:50:24.913145    4568 logs.go:276] 1 containers: [3ca2f5ec8bea]
	I0729 16:50:24.913210    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:50:24.923471    4568 logs.go:276] 2 containers: [97404cee96ac df1f20080bd7]
	I0729 16:50:24.923547    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:50:24.934143    4568 logs.go:276] 1 containers: [756c5554f214]
	I0729 16:50:24.934218    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:50:24.945244    4568 logs.go:276] 2 containers: [dafe872abaf5 ea007e6b4743]
	I0729 16:50:24.945315    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:50:24.955777    4568 logs.go:276] 0 containers: []
	W0729 16:50:24.955787    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:50:24.955841    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:50:24.965763    4568 logs.go:276] 1 containers: [f6c2b875f393]
	I0729 16:50:24.965782    4568 logs.go:123] Gathering logs for kube-proxy [756c5554f214] ...
	I0729 16:50:24.965788    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756c5554f214"
	I0729 16:50:24.977615    4568 logs.go:123] Gathering logs for kube-controller-manager [dafe872abaf5] ...
	I0729 16:50:24.977625    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafe872abaf5"
	I0729 16:50:24.995796    4568 logs.go:123] Gathering logs for kube-controller-manager [ea007e6b4743] ...
	I0729 16:50:24.995807    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea007e6b4743"
	I0729 16:50:25.007656    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:50:25.007669    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:50:25.030961    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:50:25.030968    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:50:25.067874    4568 logs.go:123] Gathering logs for etcd [6b64e4a0a495] ...
	I0729 16:50:25.067882    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b64e4a0a495"
	I0729 16:50:25.081784    4568 logs.go:123] Gathering logs for kube-apiserver [43cc4f032dd9] ...
	I0729 16:50:25.081793    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43cc4f032dd9"
	I0729 16:50:25.096458    4568 logs.go:123] Gathering logs for etcd [08e03c58f130] ...
	I0729 16:50:25.096469    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e03c58f130"
	I0729 16:50:25.115800    4568 logs.go:123] Gathering logs for coredns [3ca2f5ec8bea] ...
	I0729 16:50:25.115814    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ca2f5ec8bea"
	I0729 16:50:25.129454    4568 logs.go:123] Gathering logs for kube-scheduler [97404cee96ac] ...
	I0729 16:50:25.129467    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97404cee96ac"
	I0729 16:50:25.145689    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:50:25.145700    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:50:25.157333    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:50:25.157343    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:50:25.161252    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:50:25.161260    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:50:25.196822    4568 logs.go:123] Gathering logs for kube-apiserver [4866a9c899c6] ...
	I0729 16:50:25.196834    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4866a9c899c6"
	I0729 16:50:25.240013    4568 logs.go:123] Gathering logs for kube-scheduler [df1f20080bd7] ...
	I0729 16:50:25.240026    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f20080bd7"
	I0729 16:50:25.254892    4568 logs.go:123] Gathering logs for storage-provisioner [f6c2b875f393] ...
	I0729 16:50:25.254904    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6c2b875f393"
	I0729 16:50:27.766572    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:32.768788    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:32.768865    4568 kubeadm.go:597] duration metric: took 4m3.464794292s to restartPrimaryControlPlane
	W0729 16:50:32.768936    4568 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
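The preceding four minutes of output are a single loop: probe https://10.0.2.15:8443/healthz with a 5-second client timeout, and on every failure re-run the full log sweep. Once restartPrimaryControlPlane exceeds its budget (4m3.46s here), minikube gives up and falls back to wiping the control plane with `kubeadm reset`. A hedged manual reproduction of the probe (the curl flags are an assumption for illustration; minikube itself uses a Go HTTP client):

    # Poll the apiserver health endpoint the way the log does (sketch).
    while ! curl -ksf --max-time 5 https://10.0.2.15:8443/healthz; do
      echo "apiserver not healthy yet; retrying"
      sleep 3
    done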
	I0729 16:50:32.768968    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 16:50:33.775832    4568 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.006867375s)
	I0729 16:50:33.775911    4568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 16:50:33.781043    4568 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 16:50:33.783938    4568 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 16:50:33.786640    4568 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 16:50:33.786647    4568 kubeadm.go:157] found existing configuration files:
	
	I0729 16:50:33.786667    4568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/admin.conf
	I0729 16:50:33.788940    4568 kubeadm.go:163] "https://control-plane.minikube.internal:50508" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 16:50:33.788966    4568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 16:50:33.791956    4568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/kubelet.conf
	I0729 16:50:33.795027    4568 kubeadm.go:163] "https://control-plane.minikube.internal:50508" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 16:50:33.795049    4568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 16:50:33.797669    4568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/controller-manager.conf
	I0729 16:50:33.800190    4568 kubeadm.go:163] "https://control-plane.minikube.internal:50508" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 16:50:33.800209    4568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 16:50:33.803250    4568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/scheduler.conf
	I0729 16:50:33.805979    4568 kubeadm.go:163] "https://control-plane.minikube.internal:50508" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50508 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 16:50:33.806001    4568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
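The stale-config check above is a grep-then-remove pattern: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint (https://control-plane.minikube.internal:50508). Since none of the four files exist after the reset, every grep exits with status 2 and the follow-up `rm -f` is a no-op. As a sketch of the same pattern:

    # Drop kubeconfigs that don't reference the expected control-plane endpoint.
    endpoint="https://control-plane.minikube.internal:50508"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done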
	I0729 16:50:33.808513    4568 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 16:50:33.826439    4568 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 16:50:33.826473    4568 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 16:50:33.875306    4568 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 16:50:33.875364    4568 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 16:50:33.875425    4568 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 16:50:33.923819    4568 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 16:50:33.928027    4568 out.go:204]   - Generating certificates and keys ...
	I0729 16:50:33.928060    4568 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 16:50:33.928098    4568 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 16:50:33.928147    4568 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 16:50:33.928177    4568 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 16:50:33.928213    4568 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 16:50:33.928243    4568 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 16:50:33.928272    4568 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 16:50:33.928318    4568 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 16:50:33.928359    4568 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 16:50:33.928394    4568 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 16:50:33.928416    4568 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 16:50:33.928447    4568 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 16:50:34.015302    4568 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 16:50:34.254757    4568 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 16:50:34.433829    4568 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 16:50:34.534558    4568 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 16:50:34.563186    4568 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 16:50:34.563594    4568 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 16:50:34.563685    4568 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 16:50:34.653558    4568 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 16:50:34.657839    4568 out.go:204]   - Booting up control plane ...
	I0729 16:50:34.657886    4568 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 16:50:34.657929    4568 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 16:50:34.657961    4568 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 16:50:34.658010    4568 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 16:50:34.658097    4568 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 16:50:39.156729    4568 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502143 seconds
	I0729 16:50:39.156810    4568 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 16:50:39.161095    4568 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 16:50:39.688166    4568 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 16:50:39.688654    4568 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-480000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 16:50:40.193613    4568 kubeadm.go:310] [bootstrap-token] Using token: 4u8amr.zyu6m2bhslxi0hbj
	I0729 16:50:40.199547    4568 out.go:204]   - Configuring RBAC rules ...
	I0729 16:50:40.199609    4568 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 16:50:40.199657    4568 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 16:50:40.201851    4568 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 16:50:40.203178    4568 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 16:50:40.204304    4568 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 16:50:40.205171    4568 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 16:50:40.208541    4568 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 16:50:40.353800    4568 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 16:50:40.597726    4568 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 16:50:40.598141    4568 kubeadm.go:310] 
	I0729 16:50:40.598182    4568 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 16:50:40.598187    4568 kubeadm.go:310] 
	I0729 16:50:40.598308    4568 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 16:50:40.598313    4568 kubeadm.go:310] 
	I0729 16:50:40.598348    4568 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 16:50:40.598402    4568 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 16:50:40.598451    4568 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 16:50:40.598459    4568 kubeadm.go:310] 
	I0729 16:50:40.598499    4568 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 16:50:40.598508    4568 kubeadm.go:310] 
	I0729 16:50:40.598542    4568 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 16:50:40.598547    4568 kubeadm.go:310] 
	I0729 16:50:40.598598    4568 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 16:50:40.598657    4568 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 16:50:40.598697    4568 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 16:50:40.598700    4568 kubeadm.go:310] 
	I0729 16:50:40.598747    4568 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 16:50:40.598788    4568 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 16:50:40.598792    4568 kubeadm.go:310] 
	I0729 16:50:40.598836    4568 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4u8amr.zyu6m2bhslxi0hbj \
	I0729 16:50:40.598952    4568 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eee67fd9c4d612d4862a690faaa2f19934e920987025477254241b5525ba3040 \
	I0729 16:50:40.598974    4568 kubeadm.go:310] 	--control-plane 
	I0729 16:50:40.598978    4568 kubeadm.go:310] 
	I0729 16:50:40.599151    4568 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 16:50:40.599179    4568 kubeadm.go:310] 
	I0729 16:50:40.599229    4568 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4u8amr.zyu6m2bhslxi0hbj \
	I0729 16:50:40.599279    4568 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eee67fd9c4d612d4862a690faaa2f19934e920987025477254241b5525ba3040 
	I0729 16:50:40.599335    4568 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
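The join commands printed by kubeadm pin the cluster CA via --discovery-token-ca-cert-hash. Should that hash need to be recomputed later, the standard kubeadm recipe hashes the CA's public key with SHA-256; in this profile the CA sits under /var/lib/minikube/certs (per the certificateDir line above), so the path below is an inference from the log rather than something the log shows directly:

    # Recompute the discovery hash from the cluster CA certificate.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'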
	I0729 16:50:40.599359    4568 cni.go:84] Creating CNI manager for ""
	I0729 16:50:40.599381    4568 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:50:40.602469    4568 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 16:50:40.610417    4568 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 16:50:40.613414    4568 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
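/etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration minikube recommends for the qemu2 driver with the Docker runtime on Kubernetes v1.24+. The log records only its size (496 bytes), not its contents; an illustrative conflist of this kind (every field value here is an assumption, not the actual file) would look like:

    # Write an example bridge CNI chain (illustrative contents only).
    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF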
	I0729 16:50:40.617969    4568 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 16:50:40.618011    4568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:50:40.618036    4568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-480000 minikube.k8s.io/updated_at=2024_07_29T16_50_40_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b13baeaf4895dcc6a8c5d0ab64a27ff86dff4ae3 minikube.k8s.io/name=stopped-upgrade-480000 minikube.k8s.io/primary=true
	I0729 16:50:40.621198    4568 ops.go:34] apiserver oom_adj: -16
	I0729 16:50:40.656912    4568 kubeadm.go:1113] duration metric: took 38.935916ms to wait for elevateKubeSystemPrivileges
	I0729 16:50:40.656930    4568 kubeadm.go:394] duration metric: took 4m11.366545375s to StartCluster
	I0729 16:50:40.656940    4568 settings.go:142] acquiring lock: {Name:mk3b097bc26d2850dd7467a616788f5486d088c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:50:40.657024    4568 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:50:40.657463    4568 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/kubeconfig: {Name:mkd561657b833051fbf9227370398307b87f9720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:50:40.657646    4568 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:50:40.657670    4568 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 16:50:40.657750    4568 config.go:182] Loaded profile config "stopped-upgrade-480000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:50:40.657763    4568 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-480000"
	I0729 16:50:40.657750    4568 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-480000"
	I0729 16:50:40.657792    4568 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-480000"
	I0729 16:50:40.657796    4568 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-480000"
	W0729 16:50:40.657799    4568 addons.go:243] addon storage-provisioner should already be in state true
	I0729 16:50:40.657810    4568 host.go:66] Checking if "stopped-upgrade-480000" exists ...
	I0729 16:50:40.659034    4568 kapi.go:59] client config for stopped-upgrade-480000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19347-923/.minikube/profiles/stopped-upgrade-480000/client.key", CAFile:"/Users/jenkins/minikube-integration/19347-923/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101ae0080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 16:50:40.659147    4568 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-480000"
	W0729 16:50:40.659154    4568 addons.go:243] addon default-storageclass should already be in state true
	I0729 16:50:40.659161    4568 host.go:66] Checking if "stopped-upgrade-480000" exists ...
	I0729 16:50:40.661332    4568 out.go:177] * Verifying Kubernetes components...
	I0729 16:50:40.661640    4568 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 16:50:40.664548    4568 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 16:50:40.664554    4568 sshutil.go:53] new ssh client: &{IP:localhost Port:50473 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/id_rsa Username:docker}
	I0729 16:50:40.667314    4568 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:50:40.671389    4568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:50:40.675264    4568 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 16:50:40.675271    4568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 16:50:40.675278    4568 sshutil.go:53] new ssh client: &{IP:localhost Port:50473 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/stopped-upgrade-480000/id_rsa Username:docker}
	I0729 16:50:40.765144    4568 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 16:50:40.770061    4568 api_server.go:52] waiting for apiserver process to appear ...
	I0729 16:50:40.770105    4568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:50:40.773747    4568 api_server.go:72] duration metric: took 116.091ms to wait for apiserver process to appear ...
	I0729 16:50:40.773756    4568 api_server.go:88] waiting for apiserver healthz status ...
	I0729 16:50:40.773764    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:40.785805    4568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 16:50:40.852266    4568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 16:50:45.773927    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:45.773969    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:50.775693    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:50.775721    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:50:55.775957    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:50:55.775995    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:51:00.776416    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:51:00.776460    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:51:05.776899    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:51:05.776941    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:51:10.777449    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:51:10.777525    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 16:51:11.160965    4568 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
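Note the failure-mode difference: the healthz probes fail with a client-side 5-second timeout, while the default-storageclass callback fails with `dial tcp 10.0.2.15:8443: i/o timeout`, meaning the TCP connection itself never completes; either way the apiserver at 10.0.2.15:8443 is unreachable. The failing callback is roughly equivalent to listing storage classes against the in-guest kubeconfig (sketch, reusing paths that appear in the log):

    # Manual equivalent of the failing StorageClass list (sketch).
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.24.1/kubectl get storageclasses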
	I0729 16:51:11.165358    4568 out.go:177] * Enabled addons: storage-provisioner
	I0729 16:51:11.174306    4568 addons.go:510] duration metric: took 30.517069292s for enable addons: enabled=[storage-provisioner]
	I0729 16:51:15.778179    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:51:15.778207    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:51:20.779058    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:51:20.779095    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:51:25.780234    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:51:25.780260    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:51:30.780746    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:51:30.780766    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:51:35.782397    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:51:35.782422    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:51:40.784541    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:51:40.784709    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:51:40.796940    4568 logs.go:276] 1 containers: [18e6d078758a]
	I0729 16:51:40.797023    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:51:40.807567    4568 logs.go:276] 1 containers: [f26471d1167c]
	I0729 16:51:40.807645    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:51:40.817923    4568 logs.go:276] 2 containers: [383e0f86e8cc c43dcd466d2b]
	I0729 16:51:40.817990    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:51:40.827879    4568 logs.go:276] 1 containers: [7da3938c3fa5]
	I0729 16:51:40.827952    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:51:40.838409    4568 logs.go:276] 1 containers: [2ff0c1bd45d7]
	I0729 16:51:40.838475    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:51:40.849030    4568 logs.go:276] 1 containers: [4c0be1c50f32]
	I0729 16:51:40.849105    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:51:40.859468    4568 logs.go:276] 0 containers: []
	W0729 16:51:40.859485    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:51:40.859541    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:51:40.870275    4568 logs.go:276] 1 containers: [d5847905d341]
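Before each log-gathering pass, every control-plane component is located with docker ps -a --filter=name=k8s_<component> --format={{.ID}}; an empty result (kindnet here, which this cluster does not run) produces the logs.go:278 warning. A sketch of that discovery step under the same assumptions, using local docker in place of minikube's SSH runner; the helper name is hypothetical.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists containers whose name matches k8s_<component>, the same
	// filter the ssh_runner invocations above use. Hypothetical helper.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil // one ID per line; empty slice if none
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
			ids, err := containerIDs(c)
			switch {
			case err != nil:
				fmt.Println(c, "error:", err)
			case len(ids) == 0:
				fmt.Printf("No container was found matching %q\n", c) // cf. logs.go:278
			default:
				fmt.Printf("%d containers: %v\n", len(ids), ids) // cf. logs.go:276
			}
		}
	}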
	I0729 16:51:40.870290    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:51:40.870295    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:51:40.874588    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:51:40.874599    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:51:40.909735    4568 logs.go:123] Gathering logs for coredns [c43dcd466d2b] ...
	I0729 16:51:40.909749    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c43dcd466d2b"
	I0729 16:51:40.929080    4568 logs.go:123] Gathering logs for kube-proxy [2ff0c1bd45d7] ...
	I0729 16:51:40.929090    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0c1bd45d7"
	I0729 16:51:40.940325    4568 logs.go:123] Gathering logs for storage-provisioner [d5847905d341] ...
	I0729 16:51:40.940336    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5847905d341"
	I0729 16:51:40.953073    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:51:40.953083    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:51:40.978158    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:51:40.978168    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:51:40.989892    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:51:40.989902    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:51:41.026563    4568 logs.go:123] Gathering logs for etcd [f26471d1167c] ...
	I0729 16:51:41.026570    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26471d1167c"
	I0729 16:51:41.040244    4568 logs.go:123] Gathering logs for coredns [383e0f86e8cc] ...
	I0729 16:51:41.040259    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383e0f86e8cc"
	I0729 16:51:41.051417    4568 logs.go:123] Gathering logs for kube-scheduler [7da3938c3fa5] ...
	I0729 16:51:41.051426    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da3938c3fa5"
	I0729 16:51:41.072988    4568 logs.go:123] Gathering logs for kube-controller-manager [4c0be1c50f32] ...
	I0729 16:51:41.073000    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0be1c50f32"
	I0729 16:51:41.089779    4568 logs.go:123] Gathering logs for kube-apiserver [18e6d078758a] ...
	I0729 16:51:41.089788    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e6d078758a"
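The gathering pass itself shells out through /bin/bash -c: journalctl for the kubelet and Docker/cri-docker units, dmesg filtered to warnings and above, kubectl describe nodes against the in-guest kubeconfig, docker logs --tail 400 per discovered container, and a container-status command that falls back from crictl to docker ps -a via `which crictl || echo crictl`. The sketch below replays those exact command strings locally; in the real run they are executed over SSH inside the guest VM, so this is illustrative only (the etcd container ID is the one from this run).

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs one of the shell pipelines from the log and prints its output.
	// In the real run these commands go over SSH into the guest VM.
	func gather(name, cmd string) {
		fmt.Println("==> Gathering logs for", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Println("error:", err)
		}
		fmt.Print(string(out))
	}

	func main() {
		gather("kubelet", `sudo journalctl -u kubelet -n 400`)
		gather("Docker", `sudo journalctl -u docker -u cri-docker -n 400`)
		gather("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
		// crictl when available, otherwise plain docker ps -a, exactly as in the log:
		gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
		gather("etcd", `docker logs --tail 400 f26471d1167c`) // container ID from this run
	}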
	I0729 16:51:43.603934    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:51:48.606217    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:51:48.606617    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:51:48.644018    4568 logs.go:276] 1 containers: [18e6d078758a]
	I0729 16:51:48.644140    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:51:48.665628    4568 logs.go:276] 1 containers: [f26471d1167c]
	I0729 16:51:48.665706    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:51:48.681381    4568 logs.go:276] 2 containers: [383e0f86e8cc c43dcd466d2b]
	I0729 16:51:48.681481    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:51:48.693887    4568 logs.go:276] 1 containers: [7da3938c3fa5]
	I0729 16:51:48.693958    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:51:48.704859    4568 logs.go:276] 1 containers: [2ff0c1bd45d7]
	I0729 16:51:48.704924    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:51:48.715458    4568 logs.go:276] 1 containers: [4c0be1c50f32]
	I0729 16:51:48.715518    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:51:48.725893    4568 logs.go:276] 0 containers: []
	W0729 16:51:48.725903    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:51:48.725962    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:51:48.736634    4568 logs.go:276] 1 containers: [d5847905d341]
	I0729 16:51:48.736651    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:51:48.736657    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:51:48.761332    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:51:48.761340    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:51:48.772671    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:51:48.772681    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:51:48.808568    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:51:48.808579    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:51:48.845134    4568 logs.go:123] Gathering logs for kube-apiserver [18e6d078758a] ...
	I0729 16:51:48.845146    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e6d078758a"
	I0729 16:51:48.859681    4568 logs.go:123] Gathering logs for kube-controller-manager [4c0be1c50f32] ...
	I0729 16:51:48.859693    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0be1c50f32"
	I0729 16:51:48.885550    4568 logs.go:123] Gathering logs for storage-provisioner [d5847905d341] ...
	I0729 16:51:48.885563    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5847905d341"
	I0729 16:51:48.898471    4568 logs.go:123] Gathering logs for kube-proxy [2ff0c1bd45d7] ...
	I0729 16:51:48.898482    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0c1bd45d7"
	I0729 16:51:48.909942    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:51:48.909954    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:51:48.914170    4568 logs.go:123] Gathering logs for etcd [f26471d1167c] ...
	I0729 16:51:48.914176    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26471d1167c"
	I0729 16:51:48.929151    4568 logs.go:123] Gathering logs for coredns [383e0f86e8cc] ...
	I0729 16:51:48.929167    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383e0f86e8cc"
	I0729 16:51:48.940363    4568 logs.go:123] Gathering logs for coredns [c43dcd466d2b] ...
	I0729 16:51:48.940374    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c43dcd466d2b"
	I0729 16:51:48.951479    4568 logs.go:123] Gathering logs for kube-scheduler [7da3938c3fa5] ...
	I0729 16:51:48.951489    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da3938c3fa5"
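Note that the "Gathering logs for ..." order differs between passes (dmesg first in the previous cycle, Docker first in this one) even though the inputs are identical. That shuffling is consistent with the log sources living in a Go map, whose iteration order is deliberately randomized per run; this is an inference from the log, not something the log states. A tiny demonstration of the underlying Go behavior:

	package main

	import "fmt"

	func main() {
		// Go randomizes map iteration order on every run, so ranging over a map
		// of log sources would reproduce the shuffled "Gathering logs for ..."
		// sequences seen above. The map contents here are just labels.
		sources := map[string]string{
			"kubelet": "journalctl -u kubelet -n 400",
			"dmesg":   "dmesg ... | tail -n 400",
			"Docker":  "journalctl -u docker -u cri-docker -n 400",
			"etcd":    "docker logs --tail 400 f26471d1167c",
		}
		for name := range sources {
			fmt.Println("Gathering logs for", name, "...")
		}
	}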
	I0729 16:51:51.467769    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:51:56.470108    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:51:56.470293    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:51:56.502207    4568 logs.go:276] 1 containers: [18e6d078758a]
	I0729 16:51:56.502314    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:51:56.519601    4568 logs.go:276] 1 containers: [f26471d1167c]
	I0729 16:51:56.519675    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:51:56.532439    4568 logs.go:276] 2 containers: [383e0f86e8cc c43dcd466d2b]
	I0729 16:51:56.532505    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:51:56.543786    4568 logs.go:276] 1 containers: [7da3938c3fa5]
	I0729 16:51:56.543856    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:51:56.553580    4568 logs.go:276] 1 containers: [2ff0c1bd45d7]
	I0729 16:51:56.553647    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:51:56.563995    4568 logs.go:276] 1 containers: [4c0be1c50f32]
	I0729 16:51:56.564059    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:51:56.573614    4568 logs.go:276] 0 containers: []
	W0729 16:51:56.573626    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:51:56.573673    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:51:56.583905    4568 logs.go:276] 1 containers: [d5847905d341]
	I0729 16:51:56.583918    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:51:56.583923    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:51:56.621544    4568 logs.go:123] Gathering logs for coredns [c43dcd466d2b] ...
	I0729 16:51:56.621556    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c43dcd466d2b"
	I0729 16:51:56.633002    4568 logs.go:123] Gathering logs for kube-scheduler [7da3938c3fa5] ...
	I0729 16:51:56.633015    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da3938c3fa5"
	I0729 16:51:56.648480    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:51:56.648491    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:51:56.659728    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:51:56.659741    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:51:56.693751    4568 logs.go:123] Gathering logs for kube-apiserver [18e6d078758a] ...
	I0729 16:51:56.693758    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e6d078758a"
	I0729 16:51:56.712411    4568 logs.go:123] Gathering logs for etcd [f26471d1167c] ...
	I0729 16:51:56.712422    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26471d1167c"
	I0729 16:51:56.726066    4568 logs.go:123] Gathering logs for coredns [383e0f86e8cc] ...
	I0729 16:51:56.726077    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383e0f86e8cc"
	I0729 16:51:56.737046    4568 logs.go:123] Gathering logs for kube-proxy [2ff0c1bd45d7] ...
	I0729 16:51:56.737057    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0c1bd45d7"
	I0729 16:51:56.748460    4568 logs.go:123] Gathering logs for kube-controller-manager [4c0be1c50f32] ...
	I0729 16:51:56.748474    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0be1c50f32"
	I0729 16:51:56.765490    4568 logs.go:123] Gathering logs for storage-provisioner [d5847905d341] ...
	I0729 16:51:56.765501    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5847905d341"
	I0729 16:51:56.776553    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:51:56.776563    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:51:56.800397    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:51:56.800406    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:51:59.304979    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:52:04.306199    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:52:04.306484    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:52:04.338501    4568 logs.go:276] 1 containers: [18e6d078758a]
	I0729 16:52:04.338616    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:52:04.357586    4568 logs.go:276] 1 containers: [f26471d1167c]
	I0729 16:52:04.357679    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:52:04.372386    4568 logs.go:276] 2 containers: [383e0f86e8cc c43dcd466d2b]
	I0729 16:52:04.372467    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:52:04.384795    4568 logs.go:276] 1 containers: [7da3938c3fa5]
	I0729 16:52:04.384860    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:52:04.395570    4568 logs.go:276] 1 containers: [2ff0c1bd45d7]
	I0729 16:52:04.395638    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:52:04.405866    4568 logs.go:276] 1 containers: [4c0be1c50f32]
	I0729 16:52:04.405925    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:52:04.427641    4568 logs.go:276] 0 containers: []
	W0729 16:52:04.427656    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:52:04.427719    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:52:04.438102    4568 logs.go:276] 1 containers: [d5847905d341]
	I0729 16:52:04.438118    4568 logs.go:123] Gathering logs for kube-proxy [2ff0c1bd45d7] ...
	I0729 16:52:04.438123    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0c1bd45d7"
	I0729 16:52:04.450367    4568 logs.go:123] Gathering logs for storage-provisioner [d5847905d341] ...
	I0729 16:52:04.450378    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5847905d341"
	I0729 16:52:04.462556    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:52:04.462567    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:52:04.466844    4568 logs.go:123] Gathering logs for kube-apiserver [18e6d078758a] ...
	I0729 16:52:04.466853    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e6d078758a"
	I0729 16:52:04.481904    4568 logs.go:123] Gathering logs for etcd [f26471d1167c] ...
	I0729 16:52:04.481915    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26471d1167c"
	I0729 16:52:04.497559    4568 logs.go:123] Gathering logs for coredns [383e0f86e8cc] ...
	I0729 16:52:04.497575    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383e0f86e8cc"
	I0729 16:52:04.510509    4568 logs.go:123] Gathering logs for coredns [c43dcd466d2b] ...
	I0729 16:52:04.510519    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c43dcd466d2b"
	I0729 16:52:04.521781    4568 logs.go:123] Gathering logs for kube-scheduler [7da3938c3fa5] ...
	I0729 16:52:04.521793    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da3938c3fa5"
	I0729 16:52:04.540275    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:52:04.540284    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:52:04.574210    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:52:04.574226    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:52:04.611878    4568 logs.go:123] Gathering logs for kube-controller-manager [4c0be1c50f32] ...
	I0729 16:52:04.611894    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0be1c50f32"
	I0729 16:52:04.630044    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:52:04.630054    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:52:04.654754    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:52:04.654761    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:52:07.167987    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:52:12.170646    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:52:12.171109    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:52:12.210785    4568 logs.go:276] 1 containers: [18e6d078758a]
	I0729 16:52:12.210922    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:52:12.232139    4568 logs.go:276] 1 containers: [f26471d1167c]
	I0729 16:52:12.232243    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:52:12.249846    4568 logs.go:276] 2 containers: [383e0f86e8cc c43dcd466d2b]
	I0729 16:52:12.249922    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:52:12.262481    4568 logs.go:276] 1 containers: [7da3938c3fa5]
	I0729 16:52:12.262543    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:52:12.283206    4568 logs.go:276] 1 containers: [2ff0c1bd45d7]
	I0729 16:52:12.283285    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:52:12.293866    4568 logs.go:276] 1 containers: [4c0be1c50f32]
	I0729 16:52:12.293924    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:52:12.304174    4568 logs.go:276] 0 containers: []
	W0729 16:52:12.304186    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:52:12.304241    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:52:12.314427    4568 logs.go:276] 1 containers: [d5847905d341]
	I0729 16:52:12.314439    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:52:12.314443    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:52:12.348483    4568 logs.go:123] Gathering logs for kube-apiserver [18e6d078758a] ...
	I0729 16:52:12.348491    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e6d078758a"
	I0729 16:52:12.362846    4568 logs.go:123] Gathering logs for kube-proxy [2ff0c1bd45d7] ...
	I0729 16:52:12.362857    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0c1bd45d7"
	I0729 16:52:12.374751    4568 logs.go:123] Gathering logs for kube-controller-manager [4c0be1c50f32] ...
	I0729 16:52:12.374764    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0be1c50f32"
	I0729 16:52:12.397030    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:52:12.397040    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:52:12.421989    4568 logs.go:123] Gathering logs for kube-scheduler [7da3938c3fa5] ...
	I0729 16:52:12.421998    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da3938c3fa5"
	I0729 16:52:12.441276    4568 logs.go:123] Gathering logs for storage-provisioner [d5847905d341] ...
	I0729 16:52:12.441287    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5847905d341"
	I0729 16:52:12.454492    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:52:12.454505    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:52:12.466747    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:52:12.466760    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:52:12.470901    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:52:12.470910    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:52:12.505459    4568 logs.go:123] Gathering logs for etcd [f26471d1167c] ...
	I0729 16:52:12.505474    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26471d1167c"
	I0729 16:52:12.519415    4568 logs.go:123] Gathering logs for coredns [383e0f86e8cc] ...
	I0729 16:52:12.519427    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383e0f86e8cc"
	I0729 16:52:12.530531    4568 logs.go:123] Gathering logs for coredns [c43dcd466d2b] ...
	I0729 16:52:12.530544    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c43dcd466d2b"
	I0729 16:52:15.048454    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:52:20.050756    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:52:20.051106    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:52:20.092007    4568 logs.go:276] 1 containers: [18e6d078758a]
	I0729 16:52:20.092156    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:52:20.114450    4568 logs.go:276] 1 containers: [f26471d1167c]
	I0729 16:52:20.114554    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:52:20.129521    4568 logs.go:276] 2 containers: [383e0f86e8cc c43dcd466d2b]
	I0729 16:52:20.129596    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:52:20.143413    4568 logs.go:276] 1 containers: [7da3938c3fa5]
	I0729 16:52:20.143489    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:52:20.153869    4568 logs.go:276] 1 containers: [2ff0c1bd45d7]
	I0729 16:52:20.153937    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:52:20.164668    4568 logs.go:276] 1 containers: [4c0be1c50f32]
	I0729 16:52:20.164727    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:52:20.174802    4568 logs.go:276] 0 containers: []
	W0729 16:52:20.174813    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:52:20.174863    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:52:20.185647    4568 logs.go:276] 1 containers: [d5847905d341]
	I0729 16:52:20.185661    4568 logs.go:123] Gathering logs for coredns [c43dcd466d2b] ...
	I0729 16:52:20.185667    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c43dcd466d2b"
	I0729 16:52:20.202235    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:52:20.202249    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:52:20.225275    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:52:20.225290    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:52:20.230064    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:52:20.230074    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:52:20.265109    4568 logs.go:123] Gathering logs for coredns [383e0f86e8cc] ...
	I0729 16:52:20.265123    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383e0f86e8cc"
	I0729 16:52:20.276968    4568 logs.go:123] Gathering logs for kube-scheduler [7da3938c3fa5] ...
	I0729 16:52:20.276980    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da3938c3fa5"
	I0729 16:52:20.292487    4568 logs.go:123] Gathering logs for kube-proxy [2ff0c1bd45d7] ...
	I0729 16:52:20.292498    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0c1bd45d7"
	I0729 16:52:20.304673    4568 logs.go:123] Gathering logs for kube-controller-manager [4c0be1c50f32] ...
	I0729 16:52:20.304686    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0be1c50f32"
	I0729 16:52:20.334432    4568 logs.go:123] Gathering logs for storage-provisioner [d5847905d341] ...
	I0729 16:52:20.334443    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5847905d341"
	I0729 16:52:20.346061    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:52:20.346071    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:52:20.357148    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:52:20.357160    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:52:20.392147    4568 logs.go:123] Gathering logs for kube-apiserver [18e6d078758a] ...
	I0729 16:52:20.392156    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e6d078758a"
	I0729 16:52:20.405786    4568 logs.go:123] Gathering logs for etcd [f26471d1167c] ...
	I0729 16:52:20.405796    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26471d1167c"
	I0729 16:52:22.924470    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:52:27.927270    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:52:27.927798    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:52:27.969368    4568 logs.go:276] 1 containers: [18e6d078758a]
	I0729 16:52:27.969497    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:52:27.990796    4568 logs.go:276] 1 containers: [f26471d1167c]
	I0729 16:52:27.990905    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:52:28.005962    4568 logs.go:276] 2 containers: [383e0f86e8cc c43dcd466d2b]
	I0729 16:52:28.006036    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:52:28.020264    4568 logs.go:276] 1 containers: [7da3938c3fa5]
	I0729 16:52:28.020345    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:52:28.035879    4568 logs.go:276] 1 containers: [2ff0c1bd45d7]
	I0729 16:52:28.035942    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:52:28.046338    4568 logs.go:276] 1 containers: [4c0be1c50f32]
	I0729 16:52:28.046401    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:52:28.056496    4568 logs.go:276] 0 containers: []
	W0729 16:52:28.056507    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:52:28.056554    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:52:28.066815    4568 logs.go:276] 1 containers: [d5847905d341]
	I0729 16:52:28.066831    4568 logs.go:123] Gathering logs for storage-provisioner [d5847905d341] ...
	I0729 16:52:28.066837    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5847905d341"
	I0729 16:52:28.078278    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:52:28.078290    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:52:28.111963    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:52:28.111973    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:52:28.115919    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:52:28.115928    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:52:28.160313    4568 logs.go:123] Gathering logs for kube-apiserver [18e6d078758a] ...
	I0729 16:52:28.160325    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e6d078758a"
	I0729 16:52:28.175188    4568 logs.go:123] Gathering logs for coredns [383e0f86e8cc] ...
	I0729 16:52:28.175201    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383e0f86e8cc"
	I0729 16:52:28.187268    4568 logs.go:123] Gathering logs for kube-proxy [2ff0c1bd45d7] ...
	I0729 16:52:28.187281    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0c1bd45d7"
	I0729 16:52:28.199318    4568 logs.go:123] Gathering logs for kube-controller-manager [4c0be1c50f32] ...
	I0729 16:52:28.199333    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0be1c50f32"
	I0729 16:52:28.219160    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:52:28.219170    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:52:28.244958    4568 logs.go:123] Gathering logs for etcd [f26471d1167c] ...
	I0729 16:52:28.244969    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26471d1167c"
	I0729 16:52:28.260026    4568 logs.go:123] Gathering logs for coredns [c43dcd466d2b] ...
	I0729 16:52:28.260036    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c43dcd466d2b"
	I0729 16:52:28.276178    4568 logs.go:123] Gathering logs for kube-scheduler [7da3938c3fa5] ...
	I0729 16:52:28.276191    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da3938c3fa5"
	I0729 16:52:28.291299    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:52:28.291308    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:52:30.804132    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:52:35.806170    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:52:35.806483    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:52:35.847294    4568 logs.go:276] 1 containers: [18e6d078758a]
	I0729 16:52:35.847408    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:52:35.874799    4568 logs.go:276] 1 containers: [f26471d1167c]
	I0729 16:52:35.874882    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:52:35.891438    4568 logs.go:276] 2 containers: [383e0f86e8cc c43dcd466d2b]
	I0729 16:52:35.891506    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:52:35.902766    4568 logs.go:276] 1 containers: [7da3938c3fa5]
	I0729 16:52:35.902833    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:52:35.913104    4568 logs.go:276] 1 containers: [2ff0c1bd45d7]
	I0729 16:52:35.913169    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:52:35.923839    4568 logs.go:276] 1 containers: [4c0be1c50f32]
	I0729 16:52:35.923904    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:52:35.933826    4568 logs.go:276] 0 containers: []
	W0729 16:52:35.933838    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:52:35.933895    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:52:35.944125    4568 logs.go:276] 1 containers: [d5847905d341]
	I0729 16:52:35.944142    4568 logs.go:123] Gathering logs for kube-apiserver [18e6d078758a] ...
	I0729 16:52:35.944148    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e6d078758a"
	I0729 16:52:35.957534    4568 logs.go:123] Gathering logs for etcd [f26471d1167c] ...
	I0729 16:52:35.957546    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26471d1167c"
	I0729 16:52:35.971157    4568 logs.go:123] Gathering logs for coredns [c43dcd466d2b] ...
	I0729 16:52:35.971167    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c43dcd466d2b"
	I0729 16:52:35.983022    4568 logs.go:123] Gathering logs for kube-scheduler [7da3938c3fa5] ...
	I0729 16:52:35.983032    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da3938c3fa5"
	I0729 16:52:35.997539    4568 logs.go:123] Gathering logs for kube-proxy [2ff0c1bd45d7] ...
	I0729 16:52:35.997551    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0c1bd45d7"
	I0729 16:52:36.009499    4568 logs.go:123] Gathering logs for storage-provisioner [d5847905d341] ...
	I0729 16:52:36.009508    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5847905d341"
	I0729 16:52:36.021028    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:52:36.021040    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:52:36.046277    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:52:36.046284    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:52:36.081577    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:52:36.081584    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:52:36.085767    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:52:36.085776    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:52:36.121887    4568 logs.go:123] Gathering logs for coredns [383e0f86e8cc] ...
	I0729 16:52:36.121901    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383e0f86e8cc"
	I0729 16:52:36.133607    4568 logs.go:123] Gathering logs for kube-controller-manager [4c0be1c50f32] ...
	I0729 16:52:36.133618    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0be1c50f32"
	I0729 16:52:36.151353    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:52:36.151363    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:52:38.664186    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:52:43.666926    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:52:43.667409    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:52:43.707212    4568 logs.go:276] 1 containers: [18e6d078758a]
	I0729 16:52:43.707340    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:52:43.728919    4568 logs.go:276] 1 containers: [f26471d1167c]
	I0729 16:52:43.729039    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:52:43.746607    4568 logs.go:276] 2 containers: [383e0f86e8cc c43dcd466d2b]
	I0729 16:52:43.746691    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:52:43.758976    4568 logs.go:276] 1 containers: [7da3938c3fa5]
	I0729 16:52:43.759052    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:52:43.769383    4568 logs.go:276] 1 containers: [2ff0c1bd45d7]
	I0729 16:52:43.769461    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:52:43.780550    4568 logs.go:276] 1 containers: [4c0be1c50f32]
	I0729 16:52:43.780619    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:52:43.790387    4568 logs.go:276] 0 containers: []
	W0729 16:52:43.790399    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:52:43.790453    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:52:43.800877    4568 logs.go:276] 1 containers: [d5847905d341]
	I0729 16:52:43.800893    4568 logs.go:123] Gathering logs for kube-apiserver [18e6d078758a] ...
	I0729 16:52:43.800898    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e6d078758a"
	I0729 16:52:43.815947    4568 logs.go:123] Gathering logs for coredns [383e0f86e8cc] ...
	I0729 16:52:43.815960    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383e0f86e8cc"
	I0729 16:52:43.827830    4568 logs.go:123] Gathering logs for kube-scheduler [7da3938c3fa5] ...
	I0729 16:52:43.827843    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da3938c3fa5"
	I0729 16:52:43.842911    4568 logs.go:123] Gathering logs for kube-controller-manager [4c0be1c50f32] ...
	I0729 16:52:43.842922    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0be1c50f32"
	I0729 16:52:43.860171    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:52:43.860184    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:52:43.871809    4568 logs.go:123] Gathering logs for storage-provisioner [d5847905d341] ...
	I0729 16:52:43.871821    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5847905d341"
	I0729 16:52:43.883736    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:52:43.883748    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:52:43.908278    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:52:43.908287    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:52:43.943472    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:52:43.943480    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:52:43.947441    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:52:43.947450    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:52:43.982198    4568 logs.go:123] Gathering logs for etcd [f26471d1167c] ...
	I0729 16:52:43.982209    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26471d1167c"
	I0729 16:52:43.996230    4568 logs.go:123] Gathering logs for coredns [c43dcd466d2b] ...
	I0729 16:52:43.996243    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c43dcd466d2b"
	I0729 16:52:44.007667    4568 logs.go:123] Gathering logs for kube-proxy [2ff0c1bd45d7] ...
	I0729 16:52:44.007678    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0c1bd45d7"
	I0729 16:52:46.520595    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:52:51.522862    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:52:51.522977    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:52:51.550162    4568 logs.go:276] 1 containers: [18e6d078758a]
	I0729 16:52:51.550267    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:52:51.579895    4568 logs.go:276] 1 containers: [f26471d1167c]
	I0729 16:52:51.579967    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:52:51.603468    4568 logs.go:276] 4 containers: [faf4b0b4bb4c 29b060eb9bb5 383e0f86e8cc c43dcd466d2b]
	I0729 16:52:51.603539    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:52:51.631857    4568 logs.go:276] 1 containers: [7da3938c3fa5]
	I0729 16:52:51.631926    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:52:51.654242    4568 logs.go:276] 1 containers: [2ff0c1bd45d7]
	I0729 16:52:51.654313    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:52:51.666989    4568 logs.go:276] 1 containers: [4c0be1c50f32]
	I0729 16:52:51.667055    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:52:51.677369    4568 logs.go:276] 0 containers: []
	W0729 16:52:51.677381    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:52:51.677441    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:52:51.688096    4568 logs.go:276] 1 containers: [d5847905d341]
	I0729 16:52:51.688111    4568 logs.go:123] Gathering logs for etcd [f26471d1167c] ...
	I0729 16:52:51.688115    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26471d1167c"
	I0729 16:52:51.702426    4568 logs.go:123] Gathering logs for coredns [c43dcd466d2b] ...
	I0729 16:52:51.702436    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c43dcd466d2b"
	I0729 16:52:51.714206    4568 logs.go:123] Gathering logs for kube-proxy [2ff0c1bd45d7] ...
	I0729 16:52:51.714215    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0c1bd45d7"
	I0729 16:52:51.725795    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:52:51.725809    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:52:51.761480    4568 logs.go:123] Gathering logs for coredns [faf4b0b4bb4c] ...
	I0729 16:52:51.761495    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf4b0b4bb4c"
	I0729 16:52:51.772870    4568 logs.go:123] Gathering logs for kube-scheduler [7da3938c3fa5] ...
	I0729 16:52:51.772883    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da3938c3fa5"
	I0729 16:52:51.792055    4568 logs.go:123] Gathering logs for kube-controller-manager [4c0be1c50f32] ...
	I0729 16:52:51.792065    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0be1c50f32"
	I0729 16:52:51.809917    4568 logs.go:123] Gathering logs for storage-provisioner [d5847905d341] ...
	I0729 16:52:51.809928    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5847905d341"
	I0729 16:52:51.821196    4568 logs.go:123] Gathering logs for kube-apiserver [18e6d078758a] ...
	I0729 16:52:51.821207    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e6d078758a"
	I0729 16:52:51.836125    4568 logs.go:123] Gathering logs for coredns [29b060eb9bb5] ...
	I0729 16:52:51.836134    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29b060eb9bb5"
	I0729 16:52:51.851642    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:52:51.851657    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:52:51.875719    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:52:51.875726    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:52:51.887417    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:52:51.887430    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:52:51.921181    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:52:51.921187    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:52:51.925391    4568 logs.go:123] Gathering logs for coredns [383e0f86e8cc] ...
	I0729 16:52:51.925398    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383e0f86e8cc"
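From 16:52:51 onward the coredns filter returns four containers instead of two: faf4b0b4bb4c and 29b060eb9bb5 have appeared alongside 383e0f86e8cc and c43dcd466d2b, so each pass now tails two extra logs while the healthz probe keeps timing out. A small sketch of how such a change could be spotted by diffing consecutive discovery results; the helper is hypothetical, since minikube itself just logs whatever each pass returns.

	package main

	import "fmt"

	// newContainers reports IDs present in the current discovery pass but absent
	// from the previous one -- the situation at 16:52:51, when the coredns filter
	// went from 2 to 4 matches. Hypothetical helper.
	func newContainers(prev, cur []string) []string {
		seen := make(map[string]bool, len(prev))
		for _, id := range prev {
			seen[id] = true
		}
		var fresh []string
		for _, id := range cur {
			if !seen[id] {
				fresh = append(fresh, id)
			}
		}
		return fresh
	}

	func main() {
		prev := []string{"383e0f86e8cc", "c43dcd466d2b"}
		cur := []string{"faf4b0b4bb4c", "29b060eb9bb5", "383e0f86e8cc", "c43dcd466d2b"}
		fmt.Println(newContainers(prev, cur)) // [faf4b0b4bb4c 29b060eb9bb5]
	}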
	I0729 16:52:54.436730    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:52:59.439454    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:52:59.439892    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:52:59.478456    4568 logs.go:276] 1 containers: [18e6d078758a]
	I0729 16:52:59.478583    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:52:59.498641    4568 logs.go:276] 1 containers: [f26471d1167c]
	I0729 16:52:59.498742    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:52:59.518965    4568 logs.go:276] 4 containers: [faf4b0b4bb4c 29b060eb9bb5 383e0f86e8cc c43dcd466d2b]
	I0729 16:52:59.519053    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:52:59.534237    4568 logs.go:276] 1 containers: [7da3938c3fa5]
	I0729 16:52:59.534312    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:52:59.546359    4568 logs.go:276] 1 containers: [2ff0c1bd45d7]
	I0729 16:52:59.546436    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:52:59.557029    4568 logs.go:276] 1 containers: [4c0be1c50f32]
	I0729 16:52:59.557103    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:52:59.567318    4568 logs.go:276] 0 containers: []
	W0729 16:52:59.567329    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:52:59.567389    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:52:59.578240    4568 logs.go:276] 1 containers: [d5847905d341]
	I0729 16:52:59.578258    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:52:59.578264    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:52:59.582519    4568 logs.go:123] Gathering logs for kube-apiserver [18e6d078758a] ...
	I0729 16:52:59.582525    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e6d078758a"
	I0729 16:52:59.600480    4568 logs.go:123] Gathering logs for coredns [29b060eb9bb5] ...
	I0729 16:52:59.600492    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29b060eb9bb5"
	I0729 16:52:59.615151    4568 logs.go:123] Gathering logs for kube-proxy [2ff0c1bd45d7] ...
	I0729 16:52:59.615162    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0c1bd45d7"
	I0729 16:52:59.626707    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:52:59.626718    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:52:59.662371    4568 logs.go:123] Gathering logs for kube-controller-manager [4c0be1c50f32] ...
	I0729 16:52:59.662379    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0be1c50f32"
	I0729 16:52:59.683050    4568 logs.go:123] Gathering logs for kube-scheduler [7da3938c3fa5] ...
	I0729 16:52:59.683062    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da3938c3fa5"
	I0729 16:52:59.698227    4568 logs.go:123] Gathering logs for storage-provisioner [d5847905d341] ...
	I0729 16:52:59.698240    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5847905d341"
	I0729 16:52:59.709350    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:52:59.709362    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:52:59.720345    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:52:59.720358    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:52:59.754891    4568 logs.go:123] Gathering logs for etcd [f26471d1167c] ...
	I0729 16:52:59.754905    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26471d1167c"
	I0729 16:52:59.768700    4568 logs.go:123] Gathering logs for coredns [faf4b0b4bb4c] ...
	I0729 16:52:59.768712    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf4b0b4bb4c"
	I0729 16:52:59.780184    4568 logs.go:123] Gathering logs for coredns [383e0f86e8cc] ...
	I0729 16:52:59.780199    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383e0f86e8cc"
	I0729 16:52:59.791988    4568 logs.go:123] Gathering logs for coredns [c43dcd466d2b] ...
	I0729 16:52:59.791999    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c43dcd466d2b"
	I0729 16:52:59.803679    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:52:59.803689    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:53:02.330810    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:53:07.333152    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:53:07.333234    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:53:07.344246    4568 logs.go:276] 1 containers: [18e6d078758a]
	I0729 16:53:07.344305    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:53:07.354540    4568 logs.go:276] 1 containers: [f26471d1167c]
	I0729 16:53:07.354601    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:53:07.366260    4568 logs.go:276] 4 containers: [faf4b0b4bb4c 29b060eb9bb5 383e0f86e8cc c43dcd466d2b]
	I0729 16:53:07.366317    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:53:07.376969    4568 logs.go:276] 1 containers: [7da3938c3fa5]
	I0729 16:53:07.377033    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:53:07.387658    4568 logs.go:276] 1 containers: [2ff0c1bd45d7]
	I0729 16:53:07.387716    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:53:07.404004    4568 logs.go:276] 1 containers: [4c0be1c50f32]
	I0729 16:53:07.404069    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:53:07.414682    4568 logs.go:276] 0 containers: []
	W0729 16:53:07.414694    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:53:07.414749    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:53:07.424650    4568 logs.go:276] 1 containers: [d5847905d341]
	I0729 16:53:07.424666    4568 logs.go:123] Gathering logs for etcd [f26471d1167c] ...
	I0729 16:53:07.424670    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26471d1167c"
	I0729 16:53:07.438318    4568 logs.go:123] Gathering logs for coredns [29b060eb9bb5] ...
	I0729 16:53:07.438326    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29b060eb9bb5"
	I0729 16:53:07.450977    4568 logs.go:123] Gathering logs for coredns [383e0f86e8cc] ...
	I0729 16:53:07.450988    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383e0f86e8cc"
	I0729 16:53:07.467066    4568 logs.go:123] Gathering logs for storage-provisioner [d5847905d341] ...
	I0729 16:53:07.467078    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5847905d341"
	I0729 16:53:07.478472    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:53:07.478485    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:53:07.482611    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:53:07.482620    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:53:07.516488    4568 logs.go:123] Gathering logs for kube-controller-manager [4c0be1c50f32] ...
	I0729 16:53:07.516502    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0be1c50f32"
	I0729 16:53:07.534076    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:53:07.534085    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:53:07.559154    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:53:07.559161    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:53:07.593114    4568 logs.go:123] Gathering logs for coredns [c43dcd466d2b] ...
	I0729 16:53:07.593120    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c43dcd466d2b"
	I0729 16:53:07.604303    4568 logs.go:123] Gathering logs for kube-scheduler [7da3938c3fa5] ...
	I0729 16:53:07.604313    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da3938c3fa5"
	I0729 16:53:07.622322    4568 logs.go:123] Gathering logs for kube-proxy [2ff0c1bd45d7] ...
	I0729 16:53:07.622332    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0c1bd45d7"
	I0729 16:53:07.637131    4568 logs.go:123] Gathering logs for kube-apiserver [18e6d078758a] ...
	I0729 16:53:07.637146    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e6d078758a"
	I0729 16:53:07.655837    4568 logs.go:123] Gathering logs for coredns [faf4b0b4bb4c] ...
	I0729 16:53:07.655851    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf4b0b4bb4c"
	I0729 16:53:07.667083    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:53:07.667097    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:53:10.182477    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:53:15.185160    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:53:15.185642    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:53:15.223797    4568 logs.go:276] 1 containers: [18e6d078758a]
	I0729 16:53:15.223918    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:53:15.249373    4568 logs.go:276] 1 containers: [f26471d1167c]
	I0729 16:53:15.249460    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:53:15.266731    4568 logs.go:276] 4 containers: [faf4b0b4bb4c 29b060eb9bb5 383e0f86e8cc c43dcd466d2b]
	I0729 16:53:15.266796    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:53:15.278054    4568 logs.go:276] 1 containers: [7da3938c3fa5]
	I0729 16:53:15.278119    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:53:15.288983    4568 logs.go:276] 1 containers: [2ff0c1bd45d7]
	I0729 16:53:15.289045    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:53:15.299952    4568 logs.go:276] 1 containers: [4c0be1c50f32]
	I0729 16:53:15.300017    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:53:15.317810    4568 logs.go:276] 0 containers: []
	W0729 16:53:15.317821    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:53:15.317873    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:53:15.328784    4568 logs.go:276] 1 containers: [d5847905d341]
	I0729 16:53:15.328801    4568 logs.go:123] Gathering logs for kube-controller-manager [4c0be1c50f32] ...
	I0729 16:53:15.328806    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0be1c50f32"
	I0729 16:53:15.346911    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:53:15.346924    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:53:15.383342    4568 logs.go:123] Gathering logs for kube-apiserver [18e6d078758a] ...
	I0729 16:53:15.383351    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e6d078758a"
	I0729 16:53:15.397667    4568 logs.go:123] Gathering logs for coredns [383e0f86e8cc] ...
	I0729 16:53:15.397680    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383e0f86e8cc"
	I0729 16:53:15.409926    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:53:15.409941    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:53:15.444676    4568 logs.go:123] Gathering logs for kube-scheduler [7da3938c3fa5] ...
	I0729 16:53:15.444686    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da3938c3fa5"
	I0729 16:53:15.461760    4568 logs.go:123] Gathering logs for storage-provisioner [d5847905d341] ...
	I0729 16:53:15.461773    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5847905d341"
	I0729 16:53:15.473345    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:53:15.473357    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:53:15.485672    4568 logs.go:123] Gathering logs for etcd [f26471d1167c] ...
	I0729 16:53:15.485682    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26471d1167c"
	I0729 16:53:15.504117    4568 logs.go:123] Gathering logs for coredns [29b060eb9bb5] ...
	I0729 16:53:15.504127    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29b060eb9bb5"
	I0729 16:53:15.515846    4568 logs.go:123] Gathering logs for coredns [c43dcd466d2b] ...
	I0729 16:53:15.515859    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c43dcd466d2b"
	I0729 16:53:15.528165    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:53:15.528176    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:53:15.554231    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:53:15.554239    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:53:15.558492    4568 logs.go:123] Gathering logs for coredns [faf4b0b4bb4c] ...
	I0729 16:53:15.558500    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf4b0b4bb4c"
	I0729 16:53:15.570585    4568 logs.go:123] Gathering logs for kube-proxy [2ff0c1bd45d7] ...
	I0729 16:53:15.570600    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0c1bd45d7"
	I0729 16:53:18.083817    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:53:23.086024    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:53:23.086400    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:53:23.123647    4568 logs.go:276] 1 containers: [18e6d078758a]
	I0729 16:53:23.123768    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:53:23.143413    4568 logs.go:276] 1 containers: [f26471d1167c]
	I0729 16:53:23.143484    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:53:23.155913    4568 logs.go:276] 4 containers: [faf4b0b4bb4c 29b060eb9bb5 383e0f86e8cc c43dcd466d2b]
	I0729 16:53:23.155994    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:53:23.166556    4568 logs.go:276] 1 containers: [7da3938c3fa5]
	I0729 16:53:23.166614    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:53:23.177445    4568 logs.go:276] 1 containers: [2ff0c1bd45d7]
	I0729 16:53:23.177503    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:53:23.187970    4568 logs.go:276] 1 containers: [4c0be1c50f32]
	I0729 16:53:23.188035    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:53:23.199806    4568 logs.go:276] 0 containers: []
	W0729 16:53:23.199819    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:53:23.199871    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:53:23.210472    4568 logs.go:276] 1 containers: [d5847905d341]
	I0729 16:53:23.210489    4568 logs.go:123] Gathering logs for kube-controller-manager [4c0be1c50f32] ...
	I0729 16:53:23.210494    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0be1c50f32"
	I0729 16:53:23.227779    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:53:23.227789    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:53:23.261731    4568 logs.go:123] Gathering logs for kube-apiserver [18e6d078758a] ...
	I0729 16:53:23.261742    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e6d078758a"
	I0729 16:53:23.276370    4568 logs.go:123] Gathering logs for kube-scheduler [7da3938c3fa5] ...
	I0729 16:53:23.276380    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da3938c3fa5"
	I0729 16:53:23.293486    4568 logs.go:123] Gathering logs for kube-proxy [2ff0c1bd45d7] ...
	I0729 16:53:23.293497    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0c1bd45d7"
	I0729 16:53:23.305478    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:53:23.305491    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:53:23.341157    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:53:23.341164    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:53:23.352559    4568 logs.go:123] Gathering logs for etcd [f26471d1167c] ...
	I0729 16:53:23.352569    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26471d1167c"
	I0729 16:53:23.366912    4568 logs.go:123] Gathering logs for coredns [faf4b0b4bb4c] ...
	I0729 16:53:23.366921    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf4b0b4bb4c"
	I0729 16:53:23.378079    4568 logs.go:123] Gathering logs for coredns [383e0f86e8cc] ...
	I0729 16:53:23.378089    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383e0f86e8cc"
	I0729 16:53:23.390131    4568 logs.go:123] Gathering logs for coredns [c43dcd466d2b] ...
	I0729 16:53:23.390140    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c43dcd466d2b"
	I0729 16:53:23.405448    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:53:23.405458    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:53:23.409650    4568 logs.go:123] Gathering logs for coredns [29b060eb9bb5] ...
	I0729 16:53:23.409657    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29b060eb9bb5"
	I0729 16:53:23.421044    4568 logs.go:123] Gathering logs for storage-provisioner [d5847905d341] ...
	I0729 16:53:23.421056    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5847905d341"
	I0729 16:53:23.432733    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:53:23.432744    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:53:25.959829    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:53:30.960989    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:53:30.961059    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:53:30.980586    4568 logs.go:276] 1 containers: [18e6d078758a]
	I0729 16:53:30.980638    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:53:30.991409    4568 logs.go:276] 1 containers: [f26471d1167c]
	I0729 16:53:30.991468    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:53:31.003034    4568 logs.go:276] 4 containers: [faf4b0b4bb4c 29b060eb9bb5 383e0f86e8cc c43dcd466d2b]
	I0729 16:53:31.003093    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:53:31.015604    4568 logs.go:276] 1 containers: [7da3938c3fa5]
	I0729 16:53:31.015687    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:53:31.026824    4568 logs.go:276] 1 containers: [2ff0c1bd45d7]
	I0729 16:53:31.026872    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:53:31.036974    4568 logs.go:276] 1 containers: [4c0be1c50f32]
	I0729 16:53:31.037039    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:53:31.048785    4568 logs.go:276] 0 containers: []
	W0729 16:53:31.048795    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:53:31.048845    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:53:31.060751    4568 logs.go:276] 1 containers: [d5847905d341]
	I0729 16:53:31.060769    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:53:31.060774    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:53:31.099945    4568 logs.go:123] Gathering logs for kube-scheduler [7da3938c3fa5] ...
	I0729 16:53:31.099957    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da3938c3fa5"
	I0729 16:53:31.116720    4568 logs.go:123] Gathering logs for kube-proxy [2ff0c1bd45d7] ...
	I0729 16:53:31.116729    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0c1bd45d7"
	I0729 16:53:31.129504    4568 logs.go:123] Gathering logs for kube-controller-manager [4c0be1c50f32] ...
	I0729 16:53:31.129516    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0be1c50f32"
	I0729 16:53:31.147372    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:53:31.147382    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:53:31.172994    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:53:31.173013    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:53:31.211238    4568 logs.go:123] Gathering logs for coredns [faf4b0b4bb4c] ...
	I0729 16:53:31.211250    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf4b0b4bb4c"
	I0729 16:53:31.222838    4568 logs.go:123] Gathering logs for coredns [383e0f86e8cc] ...
	I0729 16:53:31.222848    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383e0f86e8cc"
	I0729 16:53:31.234394    4568 logs.go:123] Gathering logs for coredns [c43dcd466d2b] ...
	I0729 16:53:31.234406    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c43dcd466d2b"
	I0729 16:53:31.246360    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:53:31.246370    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:53:31.258401    4568 logs.go:123] Gathering logs for etcd [f26471d1167c] ...
	I0729 16:53:31.258409    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26471d1167c"
	I0729 16:53:31.272917    4568 logs.go:123] Gathering logs for coredns [29b060eb9bb5] ...
	I0729 16:53:31.272927    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29b060eb9bb5"
	I0729 16:53:31.284386    4568 logs.go:123] Gathering logs for storage-provisioner [d5847905d341] ...
	I0729 16:53:31.284398    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5847905d341"
	I0729 16:53:31.296428    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:53:31.296441    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:53:31.301319    4568 logs.go:123] Gathering logs for kube-apiserver [18e6d078758a] ...
	I0729 16:53:31.301331    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e6d078758a"
	I0729 16:53:33.825545    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:53:38.827919    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:53:38.828001    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:53:38.839129    4568 logs.go:276] 1 containers: [18e6d078758a]
	I0729 16:53:38.839188    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:53:38.850222    4568 logs.go:276] 1 containers: [f26471d1167c]
	I0729 16:53:38.850285    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:53:38.861202    4568 logs.go:276] 4 containers: [faf4b0b4bb4c 29b060eb9bb5 383e0f86e8cc c43dcd466d2b]
	I0729 16:53:38.861267    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:53:38.872775    4568 logs.go:276] 1 containers: [7da3938c3fa5]
	I0729 16:53:38.872835    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:53:38.883510    4568 logs.go:276] 1 containers: [2ff0c1bd45d7]
	I0729 16:53:38.883571    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:53:38.894342    4568 logs.go:276] 1 containers: [4c0be1c50f32]
	I0729 16:53:38.894401    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:53:38.905624    4568 logs.go:276] 0 containers: []
	W0729 16:53:38.905637    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:53:38.905684    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:53:38.916972    4568 logs.go:276] 1 containers: [d5847905d341]
	I0729 16:53:38.916986    4568 logs.go:123] Gathering logs for etcd [f26471d1167c] ...
	I0729 16:53:38.916992    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26471d1167c"
	I0729 16:53:38.931357    4568 logs.go:123] Gathering logs for coredns [c43dcd466d2b] ...
	I0729 16:53:38.931367    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c43dcd466d2b"
	I0729 16:53:38.942984    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:53:38.942993    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:53:38.956367    4568 logs.go:123] Gathering logs for storage-provisioner [d5847905d341] ...
	I0729 16:53:38.956378    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5847905d341"
	I0729 16:53:38.967507    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:53:38.967518    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:53:39.001792    4568 logs.go:123] Gathering logs for kube-apiserver [18e6d078758a] ...
	I0729 16:53:39.001799    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e6d078758a"
	I0729 16:53:39.016171    4568 logs.go:123] Gathering logs for coredns [383e0f86e8cc] ...
	I0729 16:53:39.016180    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383e0f86e8cc"
	I0729 16:53:39.029499    4568 logs.go:123] Gathering logs for kube-proxy [2ff0c1bd45d7] ...
	I0729 16:53:39.029509    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0c1bd45d7"
	I0729 16:53:39.041225    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:53:39.041233    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:53:39.045635    4568 logs.go:123] Gathering logs for coredns [faf4b0b4bb4c] ...
	I0729 16:53:39.045644    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf4b0b4bb4c"
	I0729 16:53:39.056978    4568 logs.go:123] Gathering logs for coredns [29b060eb9bb5] ...
	I0729 16:53:39.056988    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29b060eb9bb5"
	I0729 16:53:39.068678    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:53:39.068688    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:53:39.093450    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:53:39.093456    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:53:39.126228    4568 logs.go:123] Gathering logs for kube-scheduler [7da3938c3fa5] ...
	I0729 16:53:39.126239    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da3938c3fa5"
	I0729 16:53:39.144918    4568 logs.go:123] Gathering logs for kube-controller-manager [4c0be1c50f32] ...
	I0729 16:53:39.144929    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0be1c50f32"
	I0729 16:53:41.664601    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:53:46.666849    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:53:46.667335    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:53:46.706623    4568 logs.go:276] 1 containers: [18e6d078758a]
	I0729 16:53:46.706757    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:53:46.729127    4568 logs.go:276] 1 containers: [f26471d1167c]
	I0729 16:53:46.729221    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:53:46.744184    4568 logs.go:276] 4 containers: [faf4b0b4bb4c 29b060eb9bb5 383e0f86e8cc c43dcd466d2b]
	I0729 16:53:46.744265    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:53:46.755658    4568 logs.go:276] 1 containers: [7da3938c3fa5]
	I0729 16:53:46.755732    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:53:46.767136    4568 logs.go:276] 1 containers: [2ff0c1bd45d7]
	I0729 16:53:46.767201    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:53:46.777833    4568 logs.go:276] 1 containers: [4c0be1c50f32]
	I0729 16:53:46.777898    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:53:46.787754    4568 logs.go:276] 0 containers: []
	W0729 16:53:46.787764    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:53:46.787815    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:53:46.798038    4568 logs.go:276] 1 containers: [d5847905d341]
	I0729 16:53:46.798054    4568 logs.go:123] Gathering logs for kube-apiserver [18e6d078758a] ...
	I0729 16:53:46.798058    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e6d078758a"
	I0729 16:53:46.813294    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:53:46.813304    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:53:46.824850    4568 logs.go:123] Gathering logs for coredns [c43dcd466d2b] ...
	I0729 16:53:46.824864    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c43dcd466d2b"
	I0729 16:53:46.836906    4568 logs.go:123] Gathering logs for kube-scheduler [7da3938c3fa5] ...
	I0729 16:53:46.836919    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da3938c3fa5"
	I0729 16:53:46.856843    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:53:46.856854    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:53:46.882057    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:53:46.882069    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:53:46.918094    4568 logs.go:123] Gathering logs for etcd [f26471d1167c] ...
	I0729 16:53:46.918108    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26471d1167c"
	I0729 16:53:46.935301    4568 logs.go:123] Gathering logs for coredns [29b060eb9bb5] ...
	I0729 16:53:46.935313    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29b060eb9bb5"
	I0729 16:53:46.951594    4568 logs.go:123] Gathering logs for coredns [383e0f86e8cc] ...
	I0729 16:53:46.951609    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383e0f86e8cc"
	I0729 16:53:46.963504    4568 logs.go:123] Gathering logs for coredns [faf4b0b4bb4c] ...
	I0729 16:53:46.963514    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf4b0b4bb4c"
	I0729 16:53:46.975563    4568 logs.go:123] Gathering logs for kube-proxy [2ff0c1bd45d7] ...
	I0729 16:53:46.975575    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0c1bd45d7"
	I0729 16:53:46.987673    4568 logs.go:123] Gathering logs for storage-provisioner [d5847905d341] ...
	I0729 16:53:46.987684    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5847905d341"
	I0729 16:53:46.999438    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:53:46.999451    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:53:47.035339    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:53:47.035347    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:53:47.039990    4568 logs.go:123] Gathering logs for kube-controller-manager [4c0be1c50f32] ...
	I0729 16:53:47.039998    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0be1c50f32"
	I0729 16:53:49.559539    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:53:54.562122    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:53:54.562224    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:53:54.573171    4568 logs.go:276] 1 containers: [18e6d078758a]
	I0729 16:53:54.573233    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:53:54.585016    4568 logs.go:276] 1 containers: [f26471d1167c]
	I0729 16:53:54.585078    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:53:54.597924    4568 logs.go:276] 4 containers: [faf4b0b4bb4c 29b060eb9bb5 383e0f86e8cc c43dcd466d2b]
	I0729 16:53:54.597989    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:53:54.609261    4568 logs.go:276] 1 containers: [7da3938c3fa5]
	I0729 16:53:54.609316    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:53:54.619510    4568 logs.go:276] 1 containers: [2ff0c1bd45d7]
	I0729 16:53:54.619573    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:53:54.630768    4568 logs.go:276] 1 containers: [4c0be1c50f32]
	I0729 16:53:54.630832    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:53:54.642517    4568 logs.go:276] 0 containers: []
	W0729 16:53:54.642530    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:53:54.642575    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:53:54.656924    4568 logs.go:276] 1 containers: [d5847905d341]
	I0729 16:53:54.656937    4568 logs.go:123] Gathering logs for coredns [faf4b0b4bb4c] ...
	I0729 16:53:54.656942    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf4b0b4bb4c"
	I0729 16:53:54.668963    4568 logs.go:123] Gathering logs for coredns [383e0f86e8cc] ...
	I0729 16:53:54.668975    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383e0f86e8cc"
	I0729 16:53:54.682416    4568 logs.go:123] Gathering logs for kube-scheduler [7da3938c3fa5] ...
	I0729 16:53:54.682430    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da3938c3fa5"
	I0729 16:53:54.698899    4568 logs.go:123] Gathering logs for kube-controller-manager [4c0be1c50f32] ...
	I0729 16:53:54.698909    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0be1c50f32"
	I0729 16:53:54.717024    4568 logs.go:123] Gathering logs for storage-provisioner [d5847905d341] ...
	I0729 16:53:54.717039    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5847905d341"
	I0729 16:53:54.730836    4568 logs.go:123] Gathering logs for etcd [f26471d1167c] ...
	I0729 16:53:54.730850    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26471d1167c"
	I0729 16:53:54.745192    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:53:54.745203    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:53:54.770387    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:53:54.770400    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:53:54.807890    4568 logs.go:123] Gathering logs for coredns [29b060eb9bb5] ...
	I0729 16:53:54.807906    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29b060eb9bb5"
	I0729 16:53:54.821089    4568 logs.go:123] Gathering logs for kube-apiserver [18e6d078758a] ...
	I0729 16:53:54.821098    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e6d078758a"
	I0729 16:53:54.840172    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:53:54.840182    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:53:54.877812    4568 logs.go:123] Gathering logs for coredns [c43dcd466d2b] ...
	I0729 16:53:54.877825    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c43dcd466d2b"
	I0729 16:53:54.890774    4568 logs.go:123] Gathering logs for kube-proxy [2ff0c1bd45d7] ...
	I0729 16:53:54.890785    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0c1bd45d7"
	I0729 16:53:54.903827    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:53:54.903839    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:53:54.923391    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:53:54.923402    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:53:57.429852    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:54:02.432594    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:54:02.432968    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:54:02.467075    4568 logs.go:276] 1 containers: [18e6d078758a]
	I0729 16:54:02.467201    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:54:02.486325    4568 logs.go:276] 1 containers: [f26471d1167c]
	I0729 16:54:02.486407    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:54:02.500202    4568 logs.go:276] 4 containers: [faf4b0b4bb4c 29b060eb9bb5 383e0f86e8cc c43dcd466d2b]
	I0729 16:54:02.500281    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:54:02.511640    4568 logs.go:276] 1 containers: [7da3938c3fa5]
	I0729 16:54:02.511704    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:54:02.523208    4568 logs.go:276] 1 containers: [2ff0c1bd45d7]
	I0729 16:54:02.523279    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:54:02.533870    4568 logs.go:276] 1 containers: [4c0be1c50f32]
	I0729 16:54:02.533938    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:54:02.543716    4568 logs.go:276] 0 containers: []
	W0729 16:54:02.543730    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:54:02.543784    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:54:02.554668    4568 logs.go:276] 1 containers: [d5847905d341]
	I0729 16:54:02.554687    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:54:02.554694    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:54:02.591196    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:54:02.591206    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:54:02.603774    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:54:02.603789    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:54:02.608215    4568 logs.go:123] Gathering logs for etcd [f26471d1167c] ...
	I0729 16:54:02.608224    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26471d1167c"
	I0729 16:54:02.622773    4568 logs.go:123] Gathering logs for coredns [383e0f86e8cc] ...
	I0729 16:54:02.622783    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383e0f86e8cc"
	I0729 16:54:02.634932    4568 logs.go:123] Gathering logs for kube-controller-manager [4c0be1c50f32] ...
	I0729 16:54:02.634945    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0be1c50f32"
	I0729 16:54:02.654747    4568 logs.go:123] Gathering logs for kube-apiserver [18e6d078758a] ...
	I0729 16:54:02.654768    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e6d078758a"
	I0729 16:54:02.671861    4568 logs.go:123] Gathering logs for coredns [29b060eb9bb5] ...
	I0729 16:54:02.671880    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29b060eb9bb5"
	I0729 16:54:02.685389    4568 logs.go:123] Gathering logs for coredns [c43dcd466d2b] ...
	I0729 16:54:02.685403    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c43dcd466d2b"
	I0729 16:54:02.699611    4568 logs.go:123] Gathering logs for kube-scheduler [7da3938c3fa5] ...
	I0729 16:54:02.699639    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da3938c3fa5"
	I0729 16:54:02.717250    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:54:02.717267    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:54:02.743960    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:54:02.743977    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:54:02.784557    4568 logs.go:123] Gathering logs for coredns [faf4b0b4bb4c] ...
	I0729 16:54:02.784576    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf4b0b4bb4c"
	I0729 16:54:02.796802    4568 logs.go:123] Gathering logs for kube-proxy [2ff0c1bd45d7] ...
	I0729 16:54:02.796812    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0c1bd45d7"
	I0729 16:54:02.809155    4568 logs.go:123] Gathering logs for storage-provisioner [d5847905d341] ...
	I0729 16:54:02.809166    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5847905d341"
	I0729 16:54:05.322760    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:54:10.322520    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:54:10.322989    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:54:10.362333    4568 logs.go:276] 1 containers: [18e6d078758a]
	I0729 16:54:10.362451    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:54:10.387714    4568 logs.go:276] 1 containers: [f26471d1167c]
	I0729 16:54:10.387809    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:54:10.401691    4568 logs.go:276] 4 containers: [faf4b0b4bb4c 29b060eb9bb5 383e0f86e8cc c43dcd466d2b]
	I0729 16:54:10.401772    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:54:10.415048    4568 logs.go:276] 1 containers: [7da3938c3fa5]
	I0729 16:54:10.415123    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:54:10.426195    4568 logs.go:276] 1 containers: [2ff0c1bd45d7]
	I0729 16:54:10.426268    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:54:10.436911    4568 logs.go:276] 1 containers: [4c0be1c50f32]
	I0729 16:54:10.436980    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:54:10.448849    4568 logs.go:276] 0 containers: []
	W0729 16:54:10.448862    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:54:10.448925    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:54:10.462989    4568 logs.go:276] 1 containers: [d5847905d341]
	I0729 16:54:10.463009    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:54:10.463015    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:54:10.499019    4568 logs.go:123] Gathering logs for etcd [f26471d1167c] ...
	I0729 16:54:10.499026    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26471d1167c"
	I0729 16:54:10.515594    4568 logs.go:123] Gathering logs for coredns [29b060eb9bb5] ...
	I0729 16:54:10.515605    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29b060eb9bb5"
	I0729 16:54:10.527686    4568 logs.go:123] Gathering logs for kube-scheduler [7da3938c3fa5] ...
	I0729 16:54:10.527696    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da3938c3fa5"
	I0729 16:54:10.544285    4568 logs.go:123] Gathering logs for kube-controller-manager [4c0be1c50f32] ...
	I0729 16:54:10.544295    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0be1c50f32"
	I0729 16:54:10.561608    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:54:10.561618    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:54:10.574296    4568 logs.go:123] Gathering logs for coredns [faf4b0b4bb4c] ...
	I0729 16:54:10.574308    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf4b0b4bb4c"
	I0729 16:54:10.586051    4568 logs.go:123] Gathering logs for coredns [383e0f86e8cc] ...
	I0729 16:54:10.586061    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383e0f86e8cc"
	I0729 16:54:10.597287    4568 logs.go:123] Gathering logs for coredns [c43dcd466d2b] ...
	I0729 16:54:10.597301    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c43dcd466d2b"
	I0729 16:54:10.609411    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:54:10.609420    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:54:10.632698    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:54:10.632707    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:54:10.637131    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:54:10.637139    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:54:10.670646    4568 logs.go:123] Gathering logs for kube-apiserver [18e6d078758a] ...
	I0729 16:54:10.670657    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e6d078758a"
	I0729 16:54:10.685352    4568 logs.go:123] Gathering logs for kube-proxy [2ff0c1bd45d7] ...
	I0729 16:54:10.685363    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0c1bd45d7"
	I0729 16:54:10.697441    4568 logs.go:123] Gathering logs for storage-provisioner [d5847905d341] ...
	I0729 16:54:10.697453    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5847905d341"
	I0729 16:54:13.208111    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:54:18.205080    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:54:18.205178    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:54:18.217000    4568 logs.go:276] 1 containers: [18e6d078758a]
	I0729 16:54:18.217087    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:54:18.228576    4568 logs.go:276] 1 containers: [f26471d1167c]
	I0729 16:54:18.228656    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:54:18.239805    4568 logs.go:276] 4 containers: [faf4b0b4bb4c 29b060eb9bb5 383e0f86e8cc c43dcd466d2b]
	I0729 16:54:18.239864    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:54:18.251004    4568 logs.go:276] 1 containers: [7da3938c3fa5]
	I0729 16:54:18.251060    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:54:18.264353    4568 logs.go:276] 1 containers: [2ff0c1bd45d7]
	I0729 16:54:18.264409    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:54:18.277667    4568 logs.go:276] 1 containers: [4c0be1c50f32]
	I0729 16:54:18.277720    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:54:18.288507    4568 logs.go:276] 0 containers: []
	W0729 16:54:18.288521    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:54:18.288567    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:54:18.299894    4568 logs.go:276] 1 containers: [d5847905d341]
	I0729 16:54:18.299910    4568 logs.go:123] Gathering logs for kube-proxy [2ff0c1bd45d7] ...
	I0729 16:54:18.299915    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0c1bd45d7"
	I0729 16:54:18.313483    4568 logs.go:123] Gathering logs for kube-controller-manager [4c0be1c50f32] ...
	I0729 16:54:18.313495    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0be1c50f32"
	I0729 16:54:18.332479    4568 logs.go:123] Gathering logs for etcd [f26471d1167c] ...
	I0729 16:54:18.332492    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26471d1167c"
	I0729 16:54:18.348382    4568 logs.go:123] Gathering logs for coredns [29b060eb9bb5] ...
	I0729 16:54:18.348394    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29b060eb9bb5"
	I0729 16:54:18.366117    4568 logs.go:123] Gathering logs for coredns [383e0f86e8cc] ...
	I0729 16:54:18.366125    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383e0f86e8cc"
	I0729 16:54:18.378307    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:54:18.378319    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:54:18.403486    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:54:18.403503    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:54:18.407950    4568 logs.go:123] Gathering logs for coredns [faf4b0b4bb4c] ...
	I0729 16:54:18.407958    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf4b0b4bb4c"
	I0729 16:54:18.420032    4568 logs.go:123] Gathering logs for coredns [c43dcd466d2b] ...
	I0729 16:54:18.420044    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c43dcd466d2b"
	I0729 16:54:18.432741    4568 logs.go:123] Gathering logs for kube-scheduler [7da3938c3fa5] ...
	I0729 16:54:18.432753    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da3938c3fa5"
	I0729 16:54:18.452430    4568 logs.go:123] Gathering logs for storage-provisioner [d5847905d341] ...
	I0729 16:54:18.452441    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5847905d341"
	I0729 16:54:18.464162    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:54:18.464173    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:54:18.476375    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:54:18.476389    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:54:18.512060    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:54:18.512080    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:54:18.551308    4568 logs.go:123] Gathering logs for kube-apiserver [18e6d078758a] ...
	I0729 16:54:18.551319    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e6d078758a"
	I0729 16:54:21.070792    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:54:26.069365    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:54:26.069846    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:54:26.107774    4568 logs.go:276] 1 containers: [18e6d078758a]
	I0729 16:54:26.107906    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:54:26.129585    4568 logs.go:276] 1 containers: [f26471d1167c]
	I0729 16:54:26.129687    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:54:26.145913    4568 logs.go:276] 4 containers: [faf4b0b4bb4c 29b060eb9bb5 383e0f86e8cc c43dcd466d2b]
	I0729 16:54:26.145999    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:54:26.158640    4568 logs.go:276] 1 containers: [7da3938c3fa5]
	I0729 16:54:26.158713    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:54:26.179098    4568 logs.go:276] 1 containers: [2ff0c1bd45d7]
	I0729 16:54:26.179165    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:54:26.189754    4568 logs.go:276] 1 containers: [4c0be1c50f32]
	I0729 16:54:26.189817    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:54:26.200086    4568 logs.go:276] 0 containers: []
	W0729 16:54:26.200098    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:54:26.200156    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:54:26.210188    4568 logs.go:276] 1 containers: [d5847905d341]
	I0729 16:54:26.210206    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:54:26.210212    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:54:26.244025    4568 logs.go:123] Gathering logs for kube-apiserver [18e6d078758a] ...
	I0729 16:54:26.244034    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e6d078758a"
	I0729 16:54:26.261191    4568 logs.go:123] Gathering logs for kube-proxy [2ff0c1bd45d7] ...
	I0729 16:54:26.261204    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0c1bd45d7"
	I0729 16:54:26.273346    4568 logs.go:123] Gathering logs for kube-controller-manager [4c0be1c50f32] ...
	I0729 16:54:26.273360    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0be1c50f32"
	I0729 16:54:26.291918    4568 logs.go:123] Gathering logs for etcd [f26471d1167c] ...
	I0729 16:54:26.291932    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26471d1167c"
	I0729 16:54:26.306105    4568 logs.go:123] Gathering logs for coredns [383e0f86e8cc] ...
	I0729 16:54:26.306115    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383e0f86e8cc"
	I0729 16:54:26.319761    4568 logs.go:123] Gathering logs for kube-scheduler [7da3938c3fa5] ...
	I0729 16:54:26.319773    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da3938c3fa5"
	I0729 16:54:26.334808    4568 logs.go:123] Gathering logs for storage-provisioner [d5847905d341] ...
	I0729 16:54:26.334816    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5847905d341"
	I0729 16:54:26.348862    4568 logs.go:123] Gathering logs for coredns [faf4b0b4bb4c] ...
	I0729 16:54:26.348871    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf4b0b4bb4c"
	I0729 16:54:26.360071    4568 logs.go:123] Gathering logs for coredns [29b060eb9bb5] ...
	I0729 16:54:26.360083    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29b060eb9bb5"
	I0729 16:54:26.371293    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:54:26.371306    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:54:26.393967    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:54:26.393975    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:54:26.406867    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:54:26.406880    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:54:26.443822    4568 logs.go:123] Gathering logs for coredns [c43dcd466d2b] ...
	I0729 16:54:26.443835    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c43dcd466d2b"
	I0729 16:54:26.455837    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:54:26.455850    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:54:28.968129    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:54:33.969273    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:54:33.969700    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:54:34.006159    4568 logs.go:276] 1 containers: [18e6d078758a]
	I0729 16:54:34.006284    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:54:34.028222    4568 logs.go:276] 1 containers: [f26471d1167c]
	I0729 16:54:34.028311    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:54:34.043421    4568 logs.go:276] 4 containers: [faf4b0b4bb4c 29b060eb9bb5 383e0f86e8cc c43dcd466d2b]
	I0729 16:54:34.043500    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:54:34.055881    4568 logs.go:276] 1 containers: [7da3938c3fa5]
	I0729 16:54:34.055942    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:54:34.068031    4568 logs.go:276] 1 containers: [2ff0c1bd45d7]
	I0729 16:54:34.068105    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:54:34.079062    4568 logs.go:276] 1 containers: [4c0be1c50f32]
	I0729 16:54:34.079124    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:54:34.089509    4568 logs.go:276] 0 containers: []
	W0729 16:54:34.089519    4568 logs.go:278] No container was found matching "kindnet"
	I0729 16:54:34.089570    4568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:54:34.100267    4568 logs.go:276] 1 containers: [d5847905d341]
	I0729 16:54:34.100284    4568 logs.go:123] Gathering logs for dmesg ...
	I0729 16:54:34.100289    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:54:34.104449    4568 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:54:34.104458    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:54:34.139912    4568 logs.go:123] Gathering logs for kube-apiserver [18e6d078758a] ...
	I0729 16:54:34.139927    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e6d078758a"
	I0729 16:54:34.154596    4568 logs.go:123] Gathering logs for etcd [f26471d1167c] ...
	I0729 16:54:34.154608    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26471d1167c"
	I0729 16:54:34.168925    4568 logs.go:123] Gathering logs for kube-scheduler [7da3938c3fa5] ...
	I0729 16:54:34.168937    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da3938c3fa5"
	I0729 16:54:34.184325    4568 logs.go:123] Gathering logs for kube-proxy [2ff0c1bd45d7] ...
	I0729 16:54:34.184335    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0c1bd45d7"
	I0729 16:54:34.196252    4568 logs.go:123] Gathering logs for Docker ...
	I0729 16:54:34.196261    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:54:34.218578    4568 logs.go:123] Gathering logs for container status ...
	I0729 16:54:34.218586    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:54:34.230024    4568 logs.go:123] Gathering logs for coredns [c43dcd466d2b] ...
	I0729 16:54:34.230034    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c43dcd466d2b"
	I0729 16:54:34.246586    4568 logs.go:123] Gathering logs for storage-provisioner [d5847905d341] ...
	I0729 16:54:34.246600    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5847905d341"
	I0729 16:54:34.258796    4568 logs.go:123] Gathering logs for kubelet ...
	I0729 16:54:34.258805    4568 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:54:34.295399    4568 logs.go:123] Gathering logs for coredns [faf4b0b4bb4c] ...
	I0729 16:54:34.295408    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf4b0b4bb4c"
	I0729 16:54:34.306588    4568 logs.go:123] Gathering logs for coredns [29b060eb9bb5] ...
	I0729 16:54:34.306600    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29b060eb9bb5"
	I0729 16:54:34.319437    4568 logs.go:123] Gathering logs for coredns [383e0f86e8cc] ...
	I0729 16:54:34.319450    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383e0f86e8cc"
	I0729 16:54:34.331138    4568 logs.go:123] Gathering logs for kube-controller-manager [4c0be1c50f32] ...
	I0729 16:54:34.331149    4568 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0be1c50f32"
	I0729 16:54:36.850628    4568 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:54:41.850602    4568 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:54:41.853871    4568 out.go:177] 
	W0729 16:54:41.857902    4568 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 16:54:41.857914    4568 out.go:239] * 
	W0729 16:54:41.858393    4568 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:54:41.872820    4568 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-480000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (571.50s)
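For reference, the healthz probe that times out in the loop above can be replayed by hand. A minimal sketch, assuming the stopped-upgrade-480000 profile still exists and curl is available inside the guest image (both assumptions, not part of the captured run):

    # replay the probe that api_server.go kept timing out on
    minikube -p stopped-upgrade-480000 ssh -- curl -sk https://10.0.2.15:8443/healthz

    # inspect the single kube-apiserver container the loop kept finding
    minikube -p stopped-upgrade-480000 ssh -- docker ps -a --filter=name=k8s_kube-apiserver
    minikube -p stopped-upgrade-480000 ssh -- docker logs --tail 50 18e6d078758a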

TestPause/serial/Start (9.86s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-649000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
E0729 16:51:44.974884    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-649000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.799044084s)

-- stdout --
	* [pause-649000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-649000" primary control-plane node in "pause-649000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-649000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-649000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-649000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-649000 -n pause-649000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-649000 -n pause-649000: exit status 7 (60.626666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-649000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.86s)
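Every remaining Start failure in this run reduces to the same environmental fault: the qemu2 driver launches the VM through socket_vmnet_client, and the connection to the /var/run/socket_vmnet unix socket is refused, meaning the socket_vmnet daemon is not serving on the build host. A minimal host-side triage sketch, assuming a standard /opt/socket_vmnet install (only the client path and socket path are confirmed by the logs; the daemon path, launchd label, and gateway flag are assumptions based on the socket_vmnet README):

	# is the socket present, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# if installed as a LaunchDaemon, see whether launchd knows about it (label varies by install)
	sudo launchctl list | grep -i socket_vmnet
	# or run the daemon by hand in the foreground while debugging (vmnet requires root)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet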

TestNoKubernetes/serial/StartWithK8s (10.02s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-934000 --driver=qemu2 
E0729 16:52:01.304517    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-934000 --driver=qemu2 : exit status 80 (9.951926083s)

-- stdout --
	* [NoKubernetes-934000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-934000" primary control-plane node in "NoKubernetes-934000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-934000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-934000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-934000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-934000 -n NoKubernetes-934000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-934000 -n NoKubernetes-934000: exit status 7 (63.359542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-934000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.02s)
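The post-mortem's --format={{.Host}} argument is a Go template over minikube's status struct, so one invocation can pull the other component states as well; a sketch (field names follow minikube's documented status output; reading exit code 7 as the bitmask for host, kubelet, and apiserver all stopped is an assumption based on minikube's status exit-code scheme):

	# query several status fields at once via the Go template
	out/minikube-darwin-arm64 status -p NoKubernetes-934000 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
	echo $?   # 7 here would mean nothing is running, matching state="Stopped" above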

TestNoKubernetes/serial/StartWithStopK8s (5.3s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-934000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-934000 --no-kubernetes --driver=qemu2 : exit status 80 (5.252638s)

-- stdout --
	* [NoKubernetes-934000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-934000
	* Restarting existing qemu2 VM for "NoKubernetes-934000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-934000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-934000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-934000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-934000 -n NoKubernetes-934000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-934000 -n NoKubernetes-934000: exit status 7 (48.400459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-934000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.30s)

TestNoKubernetes/serial/Start (5.32s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-934000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-934000 --no-kubernetes --driver=qemu2 : exit status 80 (5.247648667s)

-- stdout --
	* [NoKubernetes-934000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-934000
	* Restarting existing qemu2 VM for "NoKubernetes-934000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-934000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-934000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-934000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-934000 -n NoKubernetes-934000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-934000 -n NoKubernetes-934000: exit status 7 (67.071958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-934000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.32s)

TestNoKubernetes/serial/StartNoArgs (5.33s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-934000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-934000 --driver=qemu2 : exit status 80 (5.281438584s)

-- stdout --
	* [NoKubernetes-934000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-934000
	* Restarting existing qemu2 VM for "NoKubernetes-934000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-934000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-934000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-934000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-934000 -n NoKubernetes-934000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-934000 -n NoKubernetes-934000: exit status 7 (49.739042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-934000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.33s)

TestNetworkPlugins/group/kindnet/Start (9.84s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-600000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-600000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.836196s)

-- stdout --
	* [kindnet-600000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-600000" primary control-plane node in "kindnet-600000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-600000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:52:54.864351    4911 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:52:54.864471    4911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:52:54.864475    4911 out.go:304] Setting ErrFile to fd 2...
	I0729 16:52:54.864477    4911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:52:54.864616    4911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:52:54.865757    4911 out.go:298] Setting JSON to false
	I0729 16:52:54.882278    4911 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3137,"bootTime":1722294037,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:52:54.882368    4911 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:52:54.890773    4911 out.go:177] * [kindnet-600000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:52:54.897898    4911 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:52:54.897951    4911 notify.go:220] Checking for updates...
	I0729 16:52:54.904960    4911 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:52:54.907873    4911 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:52:54.910914    4911 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:52:54.913986    4911 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:52:54.915373    4911 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:52:54.918265    4911 config.go:182] Loaded profile config "multinode-100000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:52:54.918331    4911 config.go:182] Loaded profile config "stopped-upgrade-480000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:52:54.918369    4911 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:52:54.922900    4911 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:52:54.927912    4911 start.go:297] selected driver: qemu2
	I0729 16:52:54.927923    4911 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:52:54.927932    4911 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:52:54.930262    4911 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:52:54.932902    4911 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:52:54.936004    4911 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:52:54.936020    4911 cni.go:84] Creating CNI manager for "kindnet"
	I0729 16:52:54.936024    4911 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 16:52:54.936053    4911 start.go:340] cluster config:
	{Name:kindnet-600000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:52:54.939722    4911 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:52:54.946923    4911 out.go:177] * Starting "kindnet-600000" primary control-plane node in "kindnet-600000" cluster
	I0729 16:52:54.950879    4911 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:52:54.950892    4911 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:52:54.950901    4911 cache.go:56] Caching tarball of preloaded images
	I0729 16:52:54.950956    4911 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:52:54.950962    4911 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:52:54.951018    4911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/kindnet-600000/config.json ...
	I0729 16:52:54.951028    4911 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/kindnet-600000/config.json: {Name:mk950cea2801441b0436a991140961906c6bb36b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:52:54.951254    4911 start.go:360] acquireMachinesLock for kindnet-600000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:52:54.951285    4911 start.go:364] duration metric: took 25.75µs to acquireMachinesLock for "kindnet-600000"
	I0729 16:52:54.951296    4911 start.go:93] Provisioning new machine with config: &{Name:kindnet-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:52:54.951326    4911 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:52:54.957840    4911 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:52:54.974123    4911 start.go:159] libmachine.API.Create for "kindnet-600000" (driver="qemu2")
	I0729 16:52:54.974153    4911 client.go:168] LocalClient.Create starting
	I0729 16:52:54.974214    4911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:52:54.974249    4911 main.go:141] libmachine: Decoding PEM data...
	I0729 16:52:54.974270    4911 main.go:141] libmachine: Parsing certificate...
	I0729 16:52:54.974311    4911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:52:54.974334    4911 main.go:141] libmachine: Decoding PEM data...
	I0729 16:52:54.974342    4911 main.go:141] libmachine: Parsing certificate...
	I0729 16:52:54.974799    4911 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:52:55.127171    4911 main.go:141] libmachine: Creating SSH key...
	I0729 16:52:55.256834    4911 main.go:141] libmachine: Creating Disk image...
	I0729 16:52:55.256840    4911 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:52:55.257016    4911 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kindnet-600000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kindnet-600000/disk.qcow2
	I0729 16:52:55.266387    4911 main.go:141] libmachine: STDOUT: 
	I0729 16:52:55.266406    4911 main.go:141] libmachine: STDERR: 
	I0729 16:52:55.266454    4911 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kindnet-600000/disk.qcow2 +20000M
	I0729 16:52:55.274373    4911 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:52:55.274391    4911 main.go:141] libmachine: STDERR: 
	I0729 16:52:55.274402    4911 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kindnet-600000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kindnet-600000/disk.qcow2
	I0729 16:52:55.274409    4911 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:52:55.274419    4911 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:52:55.274449    4911 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kindnet-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/kindnet-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kindnet-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:63:1e:8b:72:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kindnet-600000/disk.qcow2
	I0729 16:52:55.276061    4911 main.go:141] libmachine: STDOUT: 
	I0729 16:52:55.276077    4911 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:52:55.276097    4911 client.go:171] duration metric: took 301.94475ms to LocalClient.Create
	I0729 16:52:57.278318    4911 start.go:128] duration metric: took 2.326995291s to createHost
	I0729 16:52:57.278402    4911 start.go:83] releasing machines lock for "kindnet-600000", held for 2.327141125s
	W0729 16:52:57.278475    4911 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:52:57.285799    4911 out.go:177] * Deleting "kindnet-600000" in qemu2 ...
	W0729 16:52:57.313863    4911 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:52:57.313891    4911 start.go:729] Will try again in 5 seconds ...
	I0729 16:53:02.316055    4911 start.go:360] acquireMachinesLock for kindnet-600000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:53:02.316666    4911 start.go:364] duration metric: took 474.125µs to acquireMachinesLock for "kindnet-600000"
	I0729 16:53:02.316746    4911 start.go:93] Provisioning new machine with config: &{Name:kindnet-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:53:02.317070    4911 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:53:02.327592    4911 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:53:02.372559    4911 start.go:159] libmachine.API.Create for "kindnet-600000" (driver="qemu2")
	I0729 16:53:02.372610    4911 client.go:168] LocalClient.Create starting
	I0729 16:53:02.372728    4911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:53:02.372800    4911 main.go:141] libmachine: Decoding PEM data...
	I0729 16:53:02.372818    4911 main.go:141] libmachine: Parsing certificate...
	I0729 16:53:02.372880    4911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:53:02.372925    4911 main.go:141] libmachine: Decoding PEM data...
	I0729 16:53:02.372947    4911 main.go:141] libmachine: Parsing certificate...
	I0729 16:53:02.373588    4911 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:53:02.534285    4911 main.go:141] libmachine: Creating SSH key...
	I0729 16:53:02.614252    4911 main.go:141] libmachine: Creating Disk image...
	I0729 16:53:02.614261    4911 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:53:02.614448    4911 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kindnet-600000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kindnet-600000/disk.qcow2
	I0729 16:53:02.624069    4911 main.go:141] libmachine: STDOUT: 
	I0729 16:53:02.624090    4911 main.go:141] libmachine: STDERR: 
	I0729 16:53:02.624141    4911 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kindnet-600000/disk.qcow2 +20000M
	I0729 16:53:02.632305    4911 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:53:02.632323    4911 main.go:141] libmachine: STDERR: 
	I0729 16:53:02.632332    4911 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kindnet-600000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kindnet-600000/disk.qcow2
	I0729 16:53:02.632337    4911 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:53:02.632345    4911 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:53:02.632380    4911 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kindnet-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/kindnet-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kindnet-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:ce:da:2f:3a:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kindnet-600000/disk.qcow2
	I0729 16:53:02.634105    4911 main.go:141] libmachine: STDOUT: 
	I0729 16:53:02.634121    4911 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:53:02.634134    4911 client.go:171] duration metric: took 261.523708ms to LocalClient.Create
	I0729 16:53:04.636320    4911 start.go:128] duration metric: took 2.319225417s to createHost
	I0729 16:53:04.636400    4911 start.go:83] releasing machines lock for "kindnet-600000", held for 2.319741792s
	W0729 16:53:04.636754    4911 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-600000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-600000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:53:04.646218    4911 out.go:177] 
	W0729 16:53:04.651507    4911 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:53:04.651546    4911 out.go:239] * 
	* 
	W0729 16:53:04.652916    4911 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:53:04.663381    4911 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.84s)
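The verbose kindnet log above shows the whole createHost path: libmachine converts the raw boot image to qcow2, grows it, then hands the QEMU command line to socket_vmnet_client, which is the step that fails. The disk prep can be reproduced by hand with the exact commands logged (MACHINE is a stand-in variable for the profile's machine directory):

	# the same two qemu-img steps the driver logged above
	MACHINE=/Users/jenkins/minikube-integration/19347-923/.minikube/machines/kindnet-600000
	qemu-img convert -f raw -O qcow2 "$MACHINE/disk.qcow2.raw" "$MACHINE/disk.qcow2"
	qemu-img resize "$MACHINE/disk.qcow2" +20000M
	# socket_vmnet_client then connects to /var/run/socket_vmnet and launches
	# qemu-system-aarch64 with the connected socket as fd 3 (-netdev socket,id=net0,fd=3);
	# it is that connect step which is refused throughout this run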

TestNetworkPlugins/group/auto/Start (9.94s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-600000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-600000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.933138875s)

-- stdout --
	* [auto-600000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-600000" primary control-plane node in "auto-600000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-600000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:53:06.896480    5024 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:53:06.896620    5024 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:06.896623    5024 out.go:304] Setting ErrFile to fd 2...
	I0729 16:53:06.896626    5024 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:06.896768    5024 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:53:06.898055    5024 out.go:298] Setting JSON to false
	I0729 16:53:06.914624    5024 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3149,"bootTime":1722294037,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:53:06.914689    5024 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:53:06.921707    5024 out.go:177] * [auto-600000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:53:06.929648    5024 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:53:06.929704    5024 notify.go:220] Checking for updates...
	I0729 16:53:06.936604    5024 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:53:06.939572    5024 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:53:06.942581    5024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:53:06.945630    5024 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:53:06.948598    5024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:53:06.952056    5024 config.go:182] Loaded profile config "multinode-100000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:53:06.952125    5024 config.go:182] Loaded profile config "stopped-upgrade-480000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:53:06.952176    5024 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:53:06.956622    5024 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:53:06.963642    5024 start.go:297] selected driver: qemu2
	I0729 16:53:06.963650    5024 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:53:06.963657    5024 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:53:06.965928    5024 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:53:06.968653    5024 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:53:06.970036    5024 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:53:06.970080    5024 cni.go:84] Creating CNI manager for ""
	I0729 16:53:06.970088    5024 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:53:06.970092    5024 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:53:06.970126    5024 start.go:340] cluster config:
	{Name:auto-600000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:53:06.973946    5024 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:53:06.981587    5024 out.go:177] * Starting "auto-600000" primary control-plane node in "auto-600000" cluster
	I0729 16:53:06.985618    5024 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:53:06.985635    5024 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:53:06.985646    5024 cache.go:56] Caching tarball of preloaded images
	I0729 16:53:06.985706    5024 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:53:06.985711    5024 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:53:06.985775    5024 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/auto-600000/config.json ...
	I0729 16:53:06.985785    5024 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/auto-600000/config.json: {Name:mk9fa8b3cca89f3dc84e0f4a57ef3a31afafe8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:53:06.986129    5024 start.go:360] acquireMachinesLock for auto-600000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:53:06.986163    5024 start.go:364] duration metric: took 27.917µs to acquireMachinesLock for "auto-600000"
	I0729 16:53:06.986175    5024 start.go:93] Provisioning new machine with config: &{Name:auto-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:53:06.986215    5024 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:53:06.990646    5024 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:53:07.008180    5024 start.go:159] libmachine.API.Create for "auto-600000" (driver="qemu2")
	I0729 16:53:07.008213    5024 client.go:168] LocalClient.Create starting
	I0729 16:53:07.008279    5024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:53:07.008309    5024 main.go:141] libmachine: Decoding PEM data...
	I0729 16:53:07.008319    5024 main.go:141] libmachine: Parsing certificate...
	I0729 16:53:07.008358    5024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:53:07.008384    5024 main.go:141] libmachine: Decoding PEM data...
	I0729 16:53:07.008390    5024 main.go:141] libmachine: Parsing certificate...
	I0729 16:53:07.008755    5024 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:53:07.159955    5024 main.go:141] libmachine: Creating SSH key...
	I0729 16:53:07.306518    5024 main.go:141] libmachine: Creating Disk image...
	I0729 16:53:07.306527    5024 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:53:07.306747    5024 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/auto-600000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/auto-600000/disk.qcow2
	I0729 16:53:07.316324    5024 main.go:141] libmachine: STDOUT: 
	I0729 16:53:07.316337    5024 main.go:141] libmachine: STDERR: 
	I0729 16:53:07.316381    5024 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/auto-600000/disk.qcow2 +20000M
	I0729 16:53:07.324393    5024 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:53:07.324407    5024 main.go:141] libmachine: STDERR: 
	I0729 16:53:07.324426    5024 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/auto-600000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/auto-600000/disk.qcow2
	I0729 16:53:07.324431    5024 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:53:07.324441    5024 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:53:07.324472    5024 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/auto-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/auto-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/auto-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:d2:80:b6:fa:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/auto-600000/disk.qcow2
	I0729 16:53:07.326174    5024 main.go:141] libmachine: STDOUT: 
	I0729 16:53:07.326189    5024 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:53:07.326204    5024 client.go:171] duration metric: took 317.989042ms to LocalClient.Create
	I0729 16:53:09.328366    5024 start.go:128] duration metric: took 2.342155458s to createHost
	I0729 16:53:09.328430    5024 start.go:83] releasing machines lock for "auto-600000", held for 2.342291333s
	W0729 16:53:09.328529    5024 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:53:09.338774    5024 out.go:177] * Deleting "auto-600000" in qemu2 ...
	W0729 16:53:09.365300    5024 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:53:09.365324    5024 start.go:729] Will try again in 5 seconds ...
	I0729 16:53:14.367524    5024 start.go:360] acquireMachinesLock for auto-600000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:53:14.368106    5024 start.go:364] duration metric: took 448.166µs to acquireMachinesLock for "auto-600000"
	I0729 16:53:14.368257    5024 start.go:93] Provisioning new machine with config: &{Name:auto-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:53:14.368588    5024 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:53:14.378377    5024 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:53:14.423816    5024 start.go:159] libmachine.API.Create for "auto-600000" (driver="qemu2")
	I0729 16:53:14.423870    5024 client.go:168] LocalClient.Create starting
	I0729 16:53:14.423993    5024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:53:14.424063    5024 main.go:141] libmachine: Decoding PEM data...
	I0729 16:53:14.424083    5024 main.go:141] libmachine: Parsing certificate...
	I0729 16:53:14.424146    5024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:53:14.424191    5024 main.go:141] libmachine: Decoding PEM data...
	I0729 16:53:14.424204    5024 main.go:141] libmachine: Parsing certificate...
	I0729 16:53:14.424745    5024 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:53:14.581976    5024 main.go:141] libmachine: Creating SSH key...
	I0729 16:53:14.740980    5024 main.go:141] libmachine: Creating Disk image...
	I0729 16:53:14.741000    5024 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:53:14.741211    5024 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/auto-600000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/auto-600000/disk.qcow2
	I0729 16:53:14.751184    5024 main.go:141] libmachine: STDOUT: 
	I0729 16:53:14.751206    5024 main.go:141] libmachine: STDERR: 
	I0729 16:53:14.751257    5024 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/auto-600000/disk.qcow2 +20000M
	I0729 16:53:14.759292    5024 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:53:14.759308    5024 main.go:141] libmachine: STDERR: 
	I0729 16:53:14.759322    5024 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/auto-600000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/auto-600000/disk.qcow2
	I0729 16:53:14.759327    5024 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:53:14.759333    5024 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:53:14.759362    5024 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/auto-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/auto-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/auto-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:97:2b:db:84:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/auto-600000/disk.qcow2
	I0729 16:53:14.761152    5024 main.go:141] libmachine: STDOUT: 
	I0729 16:53:14.761169    5024 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:53:14.761180    5024 client.go:171] duration metric: took 337.309292ms to LocalClient.Create
	I0729 16:53:16.763368    5024 start.go:128] duration metric: took 2.39478325s to createHost
	I0729 16:53:16.763579    5024 start.go:83] releasing machines lock for "auto-600000", held for 2.395347041s
	W0729 16:53:16.763996    5024 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-600000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-600000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:53:16.771627    5024 out.go:177] 
	W0729 16:53:16.777718    5024 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:53:16.777746    5024 out.go:239] * 
	* 
	W0729 16:53:16.780701    5024 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:53:16.787666    5024 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.94s)
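
Every failure in this group reduces to the same root cause: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and that helper cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, so host creation dies before QEMU ever boots. A minimal Go sketch of the same connectivity probe (a hypothetical diagnostic, not part of the test suite; the socket path is the SocketVMnetPath value from the cluster config above):

	package main

	// Probe the socket_vmnet control socket the same way socket_vmnet_client
	// would: a plain unix-domain dial. On this CI host the dial fails with
	// "connection refused" because no socket_vmnet daemon is listening.
	import (
		"fmt"
		"net"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err) // the state seen in this run
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is listening")
	}

If the dial is refused, restarting the socket_vmnet service on the host (however it is managed there, e.g. via launchd) should clear this whole family of failures; none of the Start tests below can pass until it is listening.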

TestNetworkPlugins/group/flannel/Start (9.92s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-600000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-600000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.921687042s)

-- stdout --
	* [flannel-600000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-600000" primary control-plane node in "flannel-600000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-600000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:53:18.925427    5133 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:53:18.925553    5133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:18.925556    5133 out.go:304] Setting ErrFile to fd 2...
	I0729 16:53:18.925559    5133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:18.925694    5133 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:53:18.926786    5133 out.go:298] Setting JSON to false
	I0729 16:53:18.943198    5133 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3161,"bootTime":1722294037,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:53:18.943272    5133 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:53:18.949432    5133 out.go:177] * [flannel-600000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:53:18.957295    5133 notify.go:220] Checking for updates...
	I0729 16:53:18.961142    5133 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:53:18.968214    5133 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:53:18.971183    5133 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:53:18.974174    5133 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:53:18.977214    5133 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:53:18.980104    5133 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:53:18.983516    5133 config.go:182] Loaded profile config "multinode-100000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:53:18.983578    5133 config.go:182] Loaded profile config "stopped-upgrade-480000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:53:18.983626    5133 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:53:18.987221    5133 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:53:18.994204    5133 start.go:297] selected driver: qemu2
	I0729 16:53:18.994213    5133 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:53:18.994220    5133 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:53:18.996419    5133 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:53:18.999210    5133 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:53:19.002244    5133 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:53:19.002258    5133 cni.go:84] Creating CNI manager for "flannel"
	I0729 16:53:19.002261    5133 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0729 16:53:19.002292    5133 start.go:340] cluster config:
	{Name:flannel-600000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:53:19.005755    5133 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:53:19.013217    5133 out.go:177] * Starting "flannel-600000" primary control-plane node in "flannel-600000" cluster
	I0729 16:53:19.017187    5133 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:53:19.017205    5133 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:53:19.017217    5133 cache.go:56] Caching tarball of preloaded images
	I0729 16:53:19.017276    5133 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:53:19.017285    5133 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:53:19.017352    5133 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/flannel-600000/config.json ...
	I0729 16:53:19.017363    5133 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/flannel-600000/config.json: {Name:mkd297c328badfb09d83e7dcbfec7760d6d4b60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:53:19.017713    5133 start.go:360] acquireMachinesLock for flannel-600000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:53:19.017741    5133 start.go:364] duration metric: took 23.5µs to acquireMachinesLock for "flannel-600000"
	I0729 16:53:19.017751    5133 start.go:93] Provisioning new machine with config: &{Name:flannel-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:53:19.017774    5133 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:53:19.022270    5133 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:53:19.037425    5133 start.go:159] libmachine.API.Create for "flannel-600000" (driver="qemu2")
	I0729 16:53:19.037450    5133 client.go:168] LocalClient.Create starting
	I0729 16:53:19.037522    5133 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:53:19.037553    5133 main.go:141] libmachine: Decoding PEM data...
	I0729 16:53:19.037563    5133 main.go:141] libmachine: Parsing certificate...
	I0729 16:53:19.037599    5133 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:53:19.037622    5133 main.go:141] libmachine: Decoding PEM data...
	I0729 16:53:19.037631    5133 main.go:141] libmachine: Parsing certificate...
	I0729 16:53:19.038066    5133 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:53:19.189198    5133 main.go:141] libmachine: Creating SSH key...
	I0729 16:53:19.298573    5133 main.go:141] libmachine: Creating Disk image...
	I0729 16:53:19.298585    5133 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:53:19.298781    5133 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/flannel-600000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/flannel-600000/disk.qcow2
	I0729 16:53:19.307948    5133 main.go:141] libmachine: STDOUT: 
	I0729 16:53:19.307976    5133 main.go:141] libmachine: STDERR: 
	I0729 16:53:19.308027    5133 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/flannel-600000/disk.qcow2 +20000M
	I0729 16:53:19.316089    5133 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:53:19.316105    5133 main.go:141] libmachine: STDERR: 
	I0729 16:53:19.316126    5133 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/flannel-600000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/flannel-600000/disk.qcow2
	I0729 16:53:19.316131    5133 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:53:19.316144    5133 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:53:19.316169    5133 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/flannel-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/flannel-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/flannel-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:01:b9:ca:d3:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/flannel-600000/disk.qcow2
	I0729 16:53:19.317835    5133 main.go:141] libmachine: STDOUT: 
	I0729 16:53:19.317851    5133 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:53:19.317870    5133 client.go:171] duration metric: took 280.419333ms to LocalClient.Create
	I0729 16:53:21.319950    5133 start.go:128] duration metric: took 2.302195166s to createHost
	I0729 16:53:21.319997    5133 start.go:83] releasing machines lock for "flannel-600000", held for 2.302283084s
	W0729 16:53:21.320030    5133 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:53:21.325212    5133 out.go:177] * Deleting "flannel-600000" in qemu2 ...
	W0729 16:53:21.343021    5133 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:53:21.343039    5133 start.go:729] Will try again in 5 seconds ...
	I0729 16:53:26.345181    5133 start.go:360] acquireMachinesLock for flannel-600000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:53:26.345673    5133 start.go:364] duration metric: took 357.5µs to acquireMachinesLock for "flannel-600000"
	I0729 16:53:26.345740    5133 start.go:93] Provisioning new machine with config: &{Name:flannel-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:53:26.345954    5133 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:53:26.352479    5133 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:53:26.394436    5133 start.go:159] libmachine.API.Create for "flannel-600000" (driver="qemu2")
	I0729 16:53:26.394495    5133 client.go:168] LocalClient.Create starting
	I0729 16:53:26.394615    5133 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:53:26.394682    5133 main.go:141] libmachine: Decoding PEM data...
	I0729 16:53:26.394699    5133 main.go:141] libmachine: Parsing certificate...
	I0729 16:53:26.394776    5133 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:53:26.394815    5133 main.go:141] libmachine: Decoding PEM data...
	I0729 16:53:26.394827    5133 main.go:141] libmachine: Parsing certificate...
	I0729 16:53:26.395395    5133 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:53:26.555685    5133 main.go:141] libmachine: Creating SSH key...
	I0729 16:53:26.759825    5133 main.go:141] libmachine: Creating Disk image...
	I0729 16:53:26.759835    5133 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:53:26.760075    5133 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/flannel-600000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/flannel-600000/disk.qcow2
	I0729 16:53:26.769858    5133 main.go:141] libmachine: STDOUT: 
	I0729 16:53:26.769887    5133 main.go:141] libmachine: STDERR: 
	I0729 16:53:26.769950    5133 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/flannel-600000/disk.qcow2 +20000M
	I0729 16:53:26.778233    5133 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:53:26.778248    5133 main.go:141] libmachine: STDERR: 
	I0729 16:53:26.778261    5133 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/flannel-600000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/flannel-600000/disk.qcow2
	I0729 16:53:26.778264    5133 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:53:26.778278    5133 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:53:26.778299    5133 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/flannel-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/flannel-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/flannel-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:5d:1d:92:ce:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/flannel-600000/disk.qcow2
	I0729 16:53:26.780035    5133 main.go:141] libmachine: STDOUT: 
	I0729 16:53:26.780052    5133 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:53:26.780065    5133 client.go:171] duration metric: took 385.568542ms to LocalClient.Create
	I0729 16:53:28.782127    5133 start.go:128] duration metric: took 2.436183625s to createHost
	I0729 16:53:28.782169    5133 start.go:83] releasing machines lock for "flannel-600000", held for 2.436501917s
	W0729 16:53:28.782309    5133 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-600000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-600000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:53:28.792657    5133 out.go:177] 
	W0729 16:53:28.799696    5133 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:53:28.799708    5133 out.go:239] * 
	* 
	W0729 16:53:28.800511    5133 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:53:28.810602    5133 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.92s)
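
Note the shape of each failure above: start.go does not give up on the first refused connection; it deletes the partial host and retries once after 5 seconds before exiting with GUEST_PROVISION. A stripped-down Go sketch of that two-attempt pattern (createHost below is a stand-in for the driver call, not minikube's actual function):

	package main

	// Mirror the retry visible in the log: attempt host creation, wait 5s
	// on failure, attempt once more, then surface the provisioning error.
	import (
		"errors"
		"fmt"
		"time"
	)

	func createHost() error {
		// Stand-in: in this run the driver always fails the same way.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}

This is why every Start failure in the group lands near 10 seconds: two ~2.3s create attempts plus the fixed 5s backoff, then exit status 80.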

TestNetworkPlugins/group/enable-default-cni/Start (9.8s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-600000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-600000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.80356675s)

-- stdout --
	* [enable-default-cni-600000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-600000" primary control-plane node in "enable-default-cni-600000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-600000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:53:31.180720    5250 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:53:31.180867    5250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:31.180871    5250 out.go:304] Setting ErrFile to fd 2...
	I0729 16:53:31.180874    5250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:31.181020    5250 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:53:31.182446    5250 out.go:298] Setting JSON to false
	I0729 16:53:31.199715    5250 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3174,"bootTime":1722294037,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:53:31.199787    5250 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:53:31.205192    5250 out.go:177] * [enable-default-cni-600000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:53:31.212213    5250 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:53:31.212272    5250 notify.go:220] Checking for updates...
	I0729 16:53:31.220117    5250 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:53:31.228195    5250 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:53:31.232228    5250 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:53:31.235320    5250 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:53:31.238186    5250 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:53:31.241565    5250 config.go:182] Loaded profile config "multinode-100000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:53:31.241627    5250 config.go:182] Loaded profile config "stopped-upgrade-480000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:53:31.241685    5250 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:53:31.245199    5250 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:53:31.252152    5250 start.go:297] selected driver: qemu2
	I0729 16:53:31.252161    5250 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:53:31.252169    5250 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:53:31.254781    5250 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:53:31.258163    5250 out.go:177] * Automatically selected the socket_vmnet network
	E0729 16:53:31.261328    5250 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0729 16:53:31.261340    5250 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:53:31.261367    5250 cni.go:84] Creating CNI manager for "bridge"
	I0729 16:53:31.261374    5250 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:53:31.261407    5250 start.go:340] cluster config:
	{Name:enable-default-cni-600000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:53:31.265619    5250 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:53:31.273166    5250 out.go:177] * Starting "enable-default-cni-600000" primary control-plane node in "enable-default-cni-600000" cluster
	I0729 16:53:31.277230    5250 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:53:31.277285    5250 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:53:31.277297    5250 cache.go:56] Caching tarball of preloaded images
	I0729 16:53:31.277391    5250 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:53:31.277398    5250 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:53:31.277463    5250 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/enable-default-cni-600000/config.json ...
	I0729 16:53:31.277475    5250 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/enable-default-cni-600000/config.json: {Name:mk403d711e78d3fec53960806b004f1d13de0a9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:53:31.277768    5250 start.go:360] acquireMachinesLock for enable-default-cni-600000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:53:31.277799    5250 start.go:364] duration metric: took 25.959µs to acquireMachinesLock for "enable-default-cni-600000"
	I0729 16:53:31.277811    5250 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:53:31.277843    5250 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:53:31.282267    5250 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:53:31.298817    5250 start.go:159] libmachine.API.Create for "enable-default-cni-600000" (driver="qemu2")
	I0729 16:53:31.298852    5250 client.go:168] LocalClient.Create starting
	I0729 16:53:31.298932    5250 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:53:31.298970    5250 main.go:141] libmachine: Decoding PEM data...
	I0729 16:53:31.298978    5250 main.go:141] libmachine: Parsing certificate...
	I0729 16:53:31.299016    5250 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:53:31.299039    5250 main.go:141] libmachine: Decoding PEM data...
	I0729 16:53:31.299045    5250 main.go:141] libmachine: Parsing certificate...
	I0729 16:53:31.299500    5250 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:53:31.451755    5250 main.go:141] libmachine: Creating SSH key...
	I0729 16:53:31.487035    5250 main.go:141] libmachine: Creating Disk image...
	I0729 16:53:31.487040    5250 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:53:31.487215    5250 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/enable-default-cni-600000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/enable-default-cni-600000/disk.qcow2
	I0729 16:53:31.496686    5250 main.go:141] libmachine: STDOUT: 
	I0729 16:53:31.496714    5250 main.go:141] libmachine: STDERR: 
	I0729 16:53:31.496767    5250 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/enable-default-cni-600000/disk.qcow2 +20000M
	I0729 16:53:31.504770    5250 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:53:31.504784    5250 main.go:141] libmachine: STDERR: 
	I0729 16:53:31.504797    5250 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/enable-default-cni-600000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/enable-default-cni-600000/disk.qcow2
	I0729 16:53:31.504800    5250 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:53:31.504818    5250 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:53:31.504842    5250 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/enable-default-cni-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/enable-default-cni-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/enable-default-cni-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:ce:9b:f6:74:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/enable-default-cni-600000/disk.qcow2
	I0729 16:53:31.506492    5250 main.go:141] libmachine: STDOUT: 
	I0729 16:53:31.506506    5250 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:53:31.506531    5250 client.go:171] duration metric: took 207.675417ms to LocalClient.Create
	I0729 16:53:33.508698    5250 start.go:128] duration metric: took 2.230861333s to createHost
	I0729 16:53:33.508770    5250 start.go:83] releasing machines lock for "enable-default-cni-600000", held for 2.23099475s
	W0729 16:53:33.508844    5250 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:53:33.514674    5250 out.go:177] * Deleting "enable-default-cni-600000" in qemu2 ...
	W0729 16:53:33.542148    5250 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:53:33.542188    5250 start.go:729] Will try again in 5 seconds ...
	I0729 16:53:38.544139    5250 start.go:360] acquireMachinesLock for enable-default-cni-600000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:53:38.544783    5250 start.go:364] duration metric: took 526.5µs to acquireMachinesLock for "enable-default-cni-600000"
	I0729 16:53:38.545141    5250 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:53:38.545422    5250 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:53:38.555095    5250 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:53:38.602518    5250 start.go:159] libmachine.API.Create for "enable-default-cni-600000" (driver="qemu2")
	I0729 16:53:38.602569    5250 client.go:168] LocalClient.Create starting
	I0729 16:53:38.602668    5250 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:53:38.602723    5250 main.go:141] libmachine: Decoding PEM data...
	I0729 16:53:38.602737    5250 main.go:141] libmachine: Parsing certificate...
	I0729 16:53:38.602804    5250 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:53:38.602847    5250 main.go:141] libmachine: Decoding PEM data...
	I0729 16:53:38.602858    5250 main.go:141] libmachine: Parsing certificate...
	I0729 16:53:38.603374    5250 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:53:38.762502    5250 main.go:141] libmachine: Creating SSH key...
	I0729 16:53:38.895456    5250 main.go:141] libmachine: Creating Disk image...
	I0729 16:53:38.895465    5250 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:53:38.895698    5250 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/enable-default-cni-600000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/enable-default-cni-600000/disk.qcow2
	I0729 16:53:38.906511    5250 main.go:141] libmachine: STDOUT: 
	I0729 16:53:38.906532    5250 main.go:141] libmachine: STDERR: 
	I0729 16:53:38.906579    5250 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/enable-default-cni-600000/disk.qcow2 +20000M
	I0729 16:53:38.916023    5250 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:53:38.916047    5250 main.go:141] libmachine: STDERR: 
	I0729 16:53:38.916061    5250 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/enable-default-cni-600000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/enable-default-cni-600000/disk.qcow2
	I0729 16:53:38.916065    5250 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:53:38.916077    5250 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:53:38.916099    5250 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/enable-default-cni-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/enable-default-cni-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/enable-default-cni-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:ac:ef:57:d5:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/enable-default-cni-600000/disk.qcow2
	I0729 16:53:38.918355    5250 main.go:141] libmachine: STDOUT: 
	I0729 16:53:38.918375    5250 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:53:38.918388    5250 client.go:171] duration metric: took 315.819583ms to LocalClient.Create
	I0729 16:53:40.920451    5250 start.go:128] duration metric: took 2.375044083s to createHost
	I0729 16:53:40.920487    5250 start.go:83] releasing machines lock for "enable-default-cni-600000", held for 2.375539333s
	W0729 16:53:40.920682    5250 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-600000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-600000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:53:40.930030    5250 out.go:177] 
	W0729 16:53:40.934071    5250 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:53:40.934080    5250 out.go:239] * 
	* 
	W0729 16:53:40.934630    5250 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:53:40.944869    5250 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.80s)
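
Note: every failure in this group reduces to the same stderr line: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), meaning the socket_vmnet daemon was not running on the CI host when the qemu2 driver tried to attach the VM's network. A minimal Go sketch of the same connectivity probe, independent of minikube (the socket path is the one recorded in the log; the two-second timeout and the program itself are illustrative, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Dial the unix socket that socket_vmnet_client connects to in the log.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// A refused connection here reproduces the report's failure mode.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Running a probe like this before the test group starts would distinguish an environment problem (daemon down, as here) from a genuine driver regression.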

                                                
                                    
TestNetworkPlugins/group/bridge/Start (9.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-600000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-600000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.899984s)

                                                
                                                
-- stdout --
	* [bridge-600000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-600000" primary control-plane node in "bridge-600000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-600000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:53:43.099395    5365 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:53:43.099524    5365 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:43.099528    5365 out.go:304] Setting ErrFile to fd 2...
	I0729 16:53:43.099531    5365 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:43.099679    5365 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:53:43.100814    5365 out.go:298] Setting JSON to false
	I0729 16:53:43.117166    5365 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3186,"bootTime":1722294037,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:53:43.117243    5365 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:53:43.122272    5365 out.go:177] * [bridge-600000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:53:43.129226    5365 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:53:43.129317    5365 notify.go:220] Checking for updates...
	I0729 16:53:43.136194    5365 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:53:43.139158    5365 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:53:43.142164    5365 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:53:43.145194    5365 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:53:43.146541    5365 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:53:43.149481    5365 config.go:182] Loaded profile config "multinode-100000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:53:43.149555    5365 config.go:182] Loaded profile config "stopped-upgrade-480000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:53:43.149602    5365 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:53:43.154246    5365 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:53:43.159189    5365 start.go:297] selected driver: qemu2
	I0729 16:53:43.159195    5365 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:53:43.159200    5365 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:53:43.161495    5365 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:53:43.165194    5365 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:53:43.168269    5365 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:53:43.168283    5365 cni.go:84] Creating CNI manager for "bridge"
	I0729 16:53:43.168291    5365 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:53:43.168323    5365 start.go:340] cluster config:
	{Name:bridge-600000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:53:43.171706    5365 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:53:43.179231    5365 out.go:177] * Starting "bridge-600000" primary control-plane node in "bridge-600000" cluster
	I0729 16:53:43.183104    5365 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:53:43.183115    5365 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:53:43.183126    5365 cache.go:56] Caching tarball of preloaded images
	I0729 16:53:43.183171    5365 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:53:43.183176    5365 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:53:43.183233    5365 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/bridge-600000/config.json ...
	I0729 16:53:43.183251    5365 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/bridge-600000/config.json: {Name:mkf5916705f2ebefbeb489febcc47ca7e083f7fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:53:43.183585    5365 start.go:360] acquireMachinesLock for bridge-600000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:53:43.183617    5365 start.go:364] duration metric: took 25.958µs to acquireMachinesLock for "bridge-600000"
	I0729 16:53:43.183629    5365 start.go:93] Provisioning new machine with config: &{Name:bridge-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:53:43.183657    5365 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:53:43.187221    5365 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:53:43.202008    5365 start.go:159] libmachine.API.Create for "bridge-600000" (driver="qemu2")
	I0729 16:53:43.202033    5365 client.go:168] LocalClient.Create starting
	I0729 16:53:43.202100    5365 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:53:43.202131    5365 main.go:141] libmachine: Decoding PEM data...
	I0729 16:53:43.202141    5365 main.go:141] libmachine: Parsing certificate...
	I0729 16:53:43.202182    5365 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:53:43.202203    5365 main.go:141] libmachine: Decoding PEM data...
	I0729 16:53:43.202211    5365 main.go:141] libmachine: Parsing certificate...
	I0729 16:53:43.202539    5365 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:53:43.356200    5365 main.go:141] libmachine: Creating SSH key...
	I0729 16:53:43.434073    5365 main.go:141] libmachine: Creating Disk image...
	I0729 16:53:43.434079    5365 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:53:43.434278    5365 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/bridge-600000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/bridge-600000/disk.qcow2
	I0729 16:53:43.443397    5365 main.go:141] libmachine: STDOUT: 
	I0729 16:53:43.443419    5365 main.go:141] libmachine: STDERR: 
	I0729 16:53:43.443466    5365 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/bridge-600000/disk.qcow2 +20000M
	I0729 16:53:43.451874    5365 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:53:43.451897    5365 main.go:141] libmachine: STDERR: 
	I0729 16:53:43.451913    5365 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/bridge-600000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/bridge-600000/disk.qcow2
	I0729 16:53:43.451917    5365 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:53:43.451930    5365 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:53:43.451962    5365 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/bridge-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/bridge-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/bridge-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:47:a3:cc:11:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/bridge-600000/disk.qcow2
	I0729 16:53:43.453919    5365 main.go:141] libmachine: STDOUT: 
	I0729 16:53:43.453936    5365 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:53:43.453956    5365 client.go:171] duration metric: took 251.921958ms to LocalClient.Create
	I0729 16:53:45.456188    5365 start.go:128] duration metric: took 2.272527583s to createHost
	I0729 16:53:45.456276    5365 start.go:83] releasing machines lock for "bridge-600000", held for 2.27268225s
	W0729 16:53:45.456317    5365 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:53:45.470163    5365 out.go:177] * Deleting "bridge-600000" in qemu2 ...
	W0729 16:53:45.488906    5365 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:53:45.488927    5365 start.go:729] Will try again in 5 seconds ...
	I0729 16:53:50.491082    5365 start.go:360] acquireMachinesLock for bridge-600000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:53:50.491282    5365 start.go:364] duration metric: took 154.25µs to acquireMachinesLock for "bridge-600000"
	I0729 16:53:50.491336    5365 start.go:93] Provisioning new machine with config: &{Name:bridge-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:53:50.491409    5365 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:53:50.500205    5365 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:53:50.517457    5365 start.go:159] libmachine.API.Create for "bridge-600000" (driver="qemu2")
	I0729 16:53:50.517490    5365 client.go:168] LocalClient.Create starting
	I0729 16:53:50.517556    5365 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:53:50.517595    5365 main.go:141] libmachine: Decoding PEM data...
	I0729 16:53:50.517605    5365 main.go:141] libmachine: Parsing certificate...
	I0729 16:53:50.517639    5365 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:53:50.517662    5365 main.go:141] libmachine: Decoding PEM data...
	I0729 16:53:50.517674    5365 main.go:141] libmachine: Parsing certificate...
	I0729 16:53:50.517974    5365 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:53:50.669633    5365 main.go:141] libmachine: Creating SSH key...
	I0729 16:53:50.907834    5365 main.go:141] libmachine: Creating Disk image...
	I0729 16:53:50.907848    5365 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:53:50.908042    5365 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/bridge-600000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/bridge-600000/disk.qcow2
	I0729 16:53:50.917420    5365 main.go:141] libmachine: STDOUT: 
	I0729 16:53:50.917442    5365 main.go:141] libmachine: STDERR: 
	I0729 16:53:50.917519    5365 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/bridge-600000/disk.qcow2 +20000M
	I0729 16:53:50.925418    5365 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:53:50.925438    5365 main.go:141] libmachine: STDERR: 
	I0729 16:53:50.925453    5365 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/bridge-600000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/bridge-600000/disk.qcow2
	I0729 16:53:50.925463    5365 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:53:50.925483    5365 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:53:50.925518    5365 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/bridge-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/bridge-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/bridge-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:ad:10:fc:f4:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/bridge-600000/disk.qcow2
	I0729 16:53:50.927249    5365 main.go:141] libmachine: STDOUT: 
	I0729 16:53:50.927264    5365 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:53:50.927276    5365 client.go:171] duration metric: took 409.787875ms to LocalClient.Create
	I0729 16:53:52.929477    5365 start.go:128] duration metric: took 2.438066083s to createHost
	I0729 16:53:52.929644    5365 start.go:83] releasing machines lock for "bridge-600000", held for 2.438363542s
	W0729 16:53:52.930072    5365 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-600000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-600000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:53:52.941307    5365 out.go:177] 
	W0729 16:53:52.946749    5365 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:53:52.946791    5365 out.go:239] * 
	* 
	W0729 16:53:52.950044    5365 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:53:52.958580    5365 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.90s)
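
The bridge failure follows the same two-attempt pattern visible in the stderr capture above: StartHost fails, the profile is deleted, start.go waits five seconds ("Will try again in 5 seconds ..."), the second create fails identically, and the run exits with GUEST_PROVISION / exit status 80. An illustrative sketch of that control flow (this is not minikube's actual code; createHost is a hypothetical stand-in for the driver call that keeps failing here):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// createHost stands in for the qemu2 driver's host creation, which in this
// report always fails because /var/run/socket_vmnet refuses connections.
func createHost(profile string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	profile := "bridge-600000" // profile name taken from the log
	if err := createHost(profile); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := createHost(profile); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			os.Exit(80) // the exit status net_test.go reports
		}
	}
}

Because the root cause is environmental, the retry cannot succeed; both attempts fail within about ten seconds, which matches the uniform ~9.8-9.9s durations across this group.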

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (9.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-600000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-600000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.81057125s)

                                                
                                                
-- stdout --
	* [kubenet-600000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-600000" primary control-plane node in "kubenet-600000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-600000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:53:55.175524    5479 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:53:55.175657    5479 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:55.175660    5479 out.go:304] Setting ErrFile to fd 2...
	I0729 16:53:55.175663    5479 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:55.175778    5479 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:53:55.176876    5479 out.go:298] Setting JSON to false
	I0729 16:53:55.193524    5479 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3198,"bootTime":1722294037,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:53:55.193596    5479 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:53:55.200683    5479 out.go:177] * [kubenet-600000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:53:55.208627    5479 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:53:55.208676    5479 notify.go:220] Checking for updates...
	I0729 16:53:55.216217    5479 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:53:55.220752    5479 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:53:55.223681    5479 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:53:55.226592    5479 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:53:55.229633    5479 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:53:55.232948    5479 config.go:182] Loaded profile config "multinode-100000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:53:55.233016    5479 config.go:182] Loaded profile config "stopped-upgrade-480000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:53:55.233056    5479 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:53:55.236571    5479 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:53:55.243623    5479 start.go:297] selected driver: qemu2
	I0729 16:53:55.243629    5479 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:53:55.243635    5479 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:53:55.245866    5479 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:53:55.247343    5479 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:53:55.250667    5479 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:53:55.250697    5479 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0729 16:53:55.250728    5479 start.go:340] cluster config:
	{Name:kubenet-600000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:53:55.254439    5479 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:53:55.261622    5479 out.go:177] * Starting "kubenet-600000" primary control-plane node in "kubenet-600000" cluster
	I0729 16:53:55.265640    5479 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:53:55.265659    5479 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:53:55.265674    5479 cache.go:56] Caching tarball of preloaded images
	I0729 16:53:55.265743    5479 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:53:55.265748    5479 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:53:55.265813    5479 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/kubenet-600000/config.json ...
	I0729 16:53:55.265824    5479 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/kubenet-600000/config.json: {Name:mk6f7107dd5cb9a1510c97a77b13ea70330c4995 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:53:55.266156    5479 start.go:360] acquireMachinesLock for kubenet-600000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:53:55.266187    5479 start.go:364] duration metric: took 25.5µs to acquireMachinesLock for "kubenet-600000"
	I0729 16:53:55.266198    5479 start.go:93] Provisioning new machine with config: &{Name:kubenet-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:53:55.266230    5479 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:53:55.270572    5479 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:53:55.286274    5479 start.go:159] libmachine.API.Create for "kubenet-600000" (driver="qemu2")
	I0729 16:53:55.286300    5479 client.go:168] LocalClient.Create starting
	I0729 16:53:55.286369    5479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:53:55.286402    5479 main.go:141] libmachine: Decoding PEM data...
	I0729 16:53:55.286414    5479 main.go:141] libmachine: Parsing certificate...
	I0729 16:53:55.286455    5479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:53:55.286478    5479 main.go:141] libmachine: Decoding PEM data...
	I0729 16:53:55.286488    5479 main.go:141] libmachine: Parsing certificate...
	I0729 16:53:55.286849    5479 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:53:55.437886    5479 main.go:141] libmachine: Creating SSH key...
	I0729 16:53:55.556119    5479 main.go:141] libmachine: Creating Disk image...
	I0729 16:53:55.556130    5479 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:53:55.556333    5479 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubenet-600000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubenet-600000/disk.qcow2
	I0729 16:53:55.565491    5479 main.go:141] libmachine: STDOUT: 
	I0729 16:53:55.565511    5479 main.go:141] libmachine: STDERR: 
	I0729 16:53:55.565562    5479 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubenet-600000/disk.qcow2 +20000M
	I0729 16:53:55.573694    5479 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:53:55.573709    5479 main.go:141] libmachine: STDERR: 
	I0729 16:53:55.573721    5479 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubenet-600000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubenet-600000/disk.qcow2
	I0729 16:53:55.573725    5479 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:53:55.573737    5479 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:53:55.573761    5479 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubenet-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubenet-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubenet-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:c9:c7:10:de:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubenet-600000/disk.qcow2
	I0729 16:53:55.575398    5479 main.go:141] libmachine: STDOUT: 
	I0729 16:53:55.575413    5479 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:53:55.575439    5479 client.go:171] duration metric: took 289.138625ms to LocalClient.Create
	I0729 16:53:57.577715    5479 start.go:128] duration metric: took 2.311488834s to createHost
	I0729 16:53:57.577778    5479 start.go:83] releasing machines lock for "kubenet-600000", held for 2.311616791s
	W0729 16:53:57.577815    5479 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:53:57.586163    5479 out.go:177] * Deleting "kubenet-600000" in qemu2 ...
	W0729 16:53:57.609579    5479 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:53:57.609593    5479 start.go:729] Will try again in 5 seconds ...
	I0729 16:54:02.611580    5479 start.go:360] acquireMachinesLock for kubenet-600000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:54:02.611689    5479 start.go:364] duration metric: took 87.459µs to acquireMachinesLock for "kubenet-600000"
	I0729 16:54:02.611703    5479 start.go:93] Provisioning new machine with config: &{Name:kubenet-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:54:02.611763    5479 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:54:02.619009    5479 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:54:02.635576    5479 start.go:159] libmachine.API.Create for "kubenet-600000" (driver="qemu2")
	I0729 16:54:02.635607    5479 client.go:168] LocalClient.Create starting
	I0729 16:54:02.635682    5479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:54:02.635717    5479 main.go:141] libmachine: Decoding PEM data...
	I0729 16:54:02.635724    5479 main.go:141] libmachine: Parsing certificate...
	I0729 16:54:02.635759    5479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:54:02.635782    5479 main.go:141] libmachine: Decoding PEM data...
	I0729 16:54:02.635788    5479 main.go:141] libmachine: Parsing certificate...
	I0729 16:54:02.636095    5479 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:54:02.788475    5479 main.go:141] libmachine: Creating SSH key...
	I0729 16:54:02.895273    5479 main.go:141] libmachine: Creating Disk image...
	I0729 16:54:02.895283    5479 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:54:02.895488    5479 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubenet-600000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubenet-600000/disk.qcow2
	I0729 16:54:02.904989    5479 main.go:141] libmachine: STDOUT: 
	I0729 16:54:02.905013    5479 main.go:141] libmachine: STDERR: 
	I0729 16:54:02.905070    5479 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubenet-600000/disk.qcow2 +20000M
	I0729 16:54:02.913132    5479 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:54:02.913147    5479 main.go:141] libmachine: STDERR: 
	I0729 16:54:02.913162    5479 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubenet-600000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubenet-600000/disk.qcow2
	I0729 16:54:02.913165    5479 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:54:02.913176    5479 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:54:02.913213    5479 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubenet-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubenet-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubenet-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:c7:89:83:40:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/kubenet-600000/disk.qcow2
	I0729 16:54:02.914895    5479 main.go:141] libmachine: STDOUT: 
	I0729 16:54:02.914908    5479 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:54:02.914922    5479 client.go:171] duration metric: took 279.316166ms to LocalClient.Create
	I0729 16:54:04.917211    5479 start.go:128] duration metric: took 2.305428084s to createHost
	I0729 16:54:04.917296    5479 start.go:83] releasing machines lock for "kubenet-600000", held for 2.305629417s
	W0729 16:54:04.917686    5479 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-600000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:54:04.927326    5479 out.go:177] 
	W0729 16:54:04.933400    5479 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:54:04.933425    5479 out.go:239] * 
	W0729 16:54:04.936201    5479 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:54:04.943393    5479 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.81s)
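
Every failure in this group dies at the same step: socket_vmnet_client cannot reach the Unix socket at /var/run/socket_vmnet, so QEMU is never handed a vmnet file descriptor (note the -netdev socket,id=net0,fd=3 in the command line above). "Connection refused" on a Unix socket means nothing is accepting on that path, i.e. the socket_vmnet daemon is not running on the build host, or is listening somewhere else. A minimal triage sketch, assuming the Homebrew-managed socket_vmnet setup that minikube's qemu2 driver documentation describes (the trailing "true" probe command is illustrative):

	# Is the socket present at the expected path?
	ls -l /var/run/socket_vmnet
	# socket_vmnet_client connects to the socket before exec'ing its argument,
	# so even a trivial command reproduces the exact error seen in the log.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# Restart the daemon; vmnet.framework needs root, hence sudo on brew services.
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet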

TestNetworkPlugins/group/custom-flannel/Start (9.88s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-600000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-600000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.877148167s)

-- stdout --
	* [custom-flannel-600000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-600000" primary control-plane node in "custom-flannel-600000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-600000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:54:07.155815    5590 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:54:07.155940    5590 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:07.155943    5590 out.go:304] Setting ErrFile to fd 2...
	I0729 16:54:07.155946    5590 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:07.156063    5590 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:54:07.157136    5590 out.go:298] Setting JSON to false
	I0729 16:54:07.173782    5590 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3210,"bootTime":1722294037,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:54:07.173866    5590 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:54:07.181070    5590 out.go:177] * [custom-flannel-600000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:54:07.189079    5590 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:54:07.189135    5590 notify.go:220] Checking for updates...
	I0729 16:54:07.196076    5590 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:54:07.199027    5590 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:54:07.201978    5590 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:54:07.205025    5590 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:54:07.208080    5590 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:54:07.211327    5590 config.go:182] Loaded profile config "multinode-100000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:54:07.211396    5590 config.go:182] Loaded profile config "stopped-upgrade-480000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:54:07.211454    5590 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:54:07.215990    5590 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:54:07.223047    5590 start.go:297] selected driver: qemu2
	I0729 16:54:07.223056    5590 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:54:07.223063    5590 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:54:07.225427    5590 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:54:07.228998    5590 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:54:07.232125    5590 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:54:07.232138    5590 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0729 16:54:07.232145    5590 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0729 16:54:07.232178    5590 start.go:340] cluster config:
	{Name:custom-flannel-600000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:54:07.235880    5590 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:54:07.243009    5590 out.go:177] * Starting "custom-flannel-600000" primary control-plane node in "custom-flannel-600000" cluster
	I0729 16:54:07.246883    5590 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:54:07.246899    5590 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:54:07.246910    5590 cache.go:56] Caching tarball of preloaded images
	I0729 16:54:07.246979    5590 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:54:07.246985    5590 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:54:07.247042    5590 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/custom-flannel-600000/config.json ...
	I0729 16:54:07.247056    5590 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/custom-flannel-600000/config.json: {Name:mk16e7c3d15e94f03a7f8a377076501204c65306 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:54:07.247410    5590 start.go:360] acquireMachinesLock for custom-flannel-600000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:54:07.247445    5590 start.go:364] duration metric: took 29.25µs to acquireMachinesLock for "custom-flannel-600000"
	I0729 16:54:07.247457    5590 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:54:07.247487    5590 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:54:07.256050    5590 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:54:07.273453    5590 start.go:159] libmachine.API.Create for "custom-flannel-600000" (driver="qemu2")
	I0729 16:54:07.273477    5590 client.go:168] LocalClient.Create starting
	I0729 16:54:07.273549    5590 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:54:07.273584    5590 main.go:141] libmachine: Decoding PEM data...
	I0729 16:54:07.273592    5590 main.go:141] libmachine: Parsing certificate...
	I0729 16:54:07.273629    5590 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:54:07.273652    5590 main.go:141] libmachine: Decoding PEM data...
	I0729 16:54:07.273660    5590 main.go:141] libmachine: Parsing certificate...
	I0729 16:54:07.274054    5590 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:54:07.424329    5590 main.go:141] libmachine: Creating SSH key...
	I0729 16:54:07.622093    5590 main.go:141] libmachine: Creating Disk image...
	I0729 16:54:07.622111    5590 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:54:07.622363    5590 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/custom-flannel-600000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/custom-flannel-600000/disk.qcow2
	I0729 16:54:07.632327    5590 main.go:141] libmachine: STDOUT: 
	I0729 16:54:07.632349    5590 main.go:141] libmachine: STDERR: 
	I0729 16:54:07.632409    5590 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/custom-flannel-600000/disk.qcow2 +20000M
	I0729 16:54:07.640487    5590 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:54:07.640503    5590 main.go:141] libmachine: STDERR: 
	I0729 16:54:07.640520    5590 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/custom-flannel-600000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/custom-flannel-600000/disk.qcow2
	I0729 16:54:07.640526    5590 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:54:07.640540    5590 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:54:07.640569    5590 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/custom-flannel-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/custom-flannel-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/custom-flannel-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:53:4d:83:b7:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/custom-flannel-600000/disk.qcow2
	I0729 16:54:07.642284    5590 main.go:141] libmachine: STDOUT: 
	I0729 16:54:07.642300    5590 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:54:07.642330    5590 client.go:171] duration metric: took 368.85475ms to LocalClient.Create
	I0729 16:54:09.644481    5590 start.go:128] duration metric: took 2.397087s to createHost
	I0729 16:54:09.644569    5590 start.go:83] releasing machines lock for "custom-flannel-600000", held for 2.397241209s
	W0729 16:54:09.644662    5590 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:54:09.652065    5590 out.go:177] * Deleting "custom-flannel-600000" in qemu2 ...
	W0729 16:54:09.678383    5590 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:54:09.678409    5590 start.go:729] Will try again in 5 seconds ...
	I0729 16:54:14.675067    5590 start.go:360] acquireMachinesLock for custom-flannel-600000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:54:14.675328    5590 start.go:364] duration metric: took 211.542µs to acquireMachinesLock for "custom-flannel-600000"
	I0729 16:54:14.675396    5590 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:54:14.675544    5590 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:54:14.679948    5590 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:54:14.706260    5590 start.go:159] libmachine.API.Create for "custom-flannel-600000" (driver="qemu2")
	I0729 16:54:14.706286    5590 client.go:168] LocalClient.Create starting
	I0729 16:54:14.706367    5590 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:54:14.706418    5590 main.go:141] libmachine: Decoding PEM data...
	I0729 16:54:14.706435    5590 main.go:141] libmachine: Parsing certificate...
	I0729 16:54:14.706478    5590 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:54:14.706508    5590 main.go:141] libmachine: Decoding PEM data...
	I0729 16:54:14.706532    5590 main.go:141] libmachine: Parsing certificate...
	I0729 16:54:14.707174    5590 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:54:14.862896    5590 main.go:141] libmachine: Creating SSH key...
	I0729 16:54:14.936696    5590 main.go:141] libmachine: Creating Disk image...
	I0729 16:54:14.936707    5590 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:54:14.936912    5590 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/custom-flannel-600000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/custom-flannel-600000/disk.qcow2
	I0729 16:54:14.946256    5590 main.go:141] libmachine: STDOUT: 
	I0729 16:54:14.946275    5590 main.go:141] libmachine: STDERR: 
	I0729 16:54:14.946318    5590 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/custom-flannel-600000/disk.qcow2 +20000M
	I0729 16:54:14.954228    5590 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:54:14.954242    5590 main.go:141] libmachine: STDERR: 
	I0729 16:54:14.954254    5590 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/custom-flannel-600000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/custom-flannel-600000/disk.qcow2
	I0729 16:54:14.954260    5590 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:54:14.954268    5590 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:54:14.954308    5590 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/custom-flannel-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/custom-flannel-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/custom-flannel-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:c7:3d:9b:77:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/custom-flannel-600000/disk.qcow2
	I0729 16:54:14.955954    5590 main.go:141] libmachine: STDOUT: 
	I0729 16:54:14.955967    5590 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:54:14.955980    5590 client.go:171] duration metric: took 249.916375ms to LocalClient.Create
	I0729 16:54:16.956472    5590 start.go:128] duration metric: took 2.282871667s to createHost
	I0729 16:54:16.956592    5590 start.go:83] releasing machines lock for "custom-flannel-600000", held for 2.283221625s
	W0729 16:54:16.957034    5590 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-600000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:54:16.966700    5590 out.go:177] 
	W0729 16:54:16.972848    5590 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:54:16.972877    5590 out.go:239] * 
	W0729 16:54:16.975354    5590 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:54:16.983721    5590 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.88s)
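
This test differs from its siblings only in the CNI flag: rather than a built-in plugin name, --cni is given the path to a flannel manifest (testdata/kube-flannel.yaml), which minikube would apply once the node is up; the run never gets that far because VM creation fails first. For reference, the flag accepts either form (profile names and manifest path below are illustrative):

	# Built-in CNI selected by name
	minikube start -p flannel-demo --cni=flannel --driver=qemu2
	# Arbitrary local CNI manifest selected by path, as this test does
	minikube start -p custom-flannel-demo --cni=./kube-flannel.yaml --driver=qemu2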

TestNetworkPlugins/group/calico/Start (9.8s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-600000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-600000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.799820625s)

-- stdout --
	* [calico-600000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-600000" primary control-plane node in "calico-600000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-600000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:54:19.392774    5710 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:54:19.392917    5710 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:19.392920    5710 out.go:304] Setting ErrFile to fd 2...
	I0729 16:54:19.392923    5710 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:19.393055    5710 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:54:19.394130    5710 out.go:298] Setting JSON to false
	I0729 16:54:19.410283    5710 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3222,"bootTime":1722294037,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:54:19.410352    5710 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:54:19.416164    5710 out.go:177] * [calico-600000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:54:19.424075    5710 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:54:19.424167    5710 notify.go:220] Checking for updates...
	I0729 16:54:19.430955    5710 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:54:19.434020    5710 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:54:19.437037    5710 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:54:19.438141    5710 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:54:19.440993    5710 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:54:19.449385    5710 config.go:182] Loaded profile config "multinode-100000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:54:19.449452    5710 config.go:182] Loaded profile config "stopped-upgrade-480000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:54:19.449502    5710 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:54:19.452892    5710 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:54:19.459971    5710 start.go:297] selected driver: qemu2
	I0729 16:54:19.459977    5710 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:54:19.459983    5710 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:54:19.462344    5710 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:54:19.463500    5710 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:54:19.466012    5710 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:54:19.466047    5710 cni.go:84] Creating CNI manager for "calico"
	I0729 16:54:19.466052    5710 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0729 16:54:19.466110    5710 start.go:340] cluster config:
	{Name:calico-600000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:54:19.469571    5710 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:54:19.476950    5710 out.go:177] * Starting "calico-600000" primary control-plane node in "calico-600000" cluster
	I0729 16:54:19.480973    5710 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:54:19.480997    5710 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:54:19.481009    5710 cache.go:56] Caching tarball of preloaded images
	I0729 16:54:19.481081    5710 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:54:19.481098    5710 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:54:19.481149    5710 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/calico-600000/config.json ...
	I0729 16:54:19.481163    5710 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/calico-600000/config.json: {Name:mke49c901edf4142279d13544f93eded298d02f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:54:19.481488    5710 start.go:360] acquireMachinesLock for calico-600000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:54:19.481520    5710 start.go:364] duration metric: took 27.084µs to acquireMachinesLock for "calico-600000"
	I0729 16:54:19.481531    5710 start.go:93] Provisioning new machine with config: &{Name:calico-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:54:19.481568    5710 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:54:19.489952    5710 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:54:19.504825    5710 start.go:159] libmachine.API.Create for "calico-600000" (driver="qemu2")
	I0729 16:54:19.504850    5710 client.go:168] LocalClient.Create starting
	I0729 16:54:19.504912    5710 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:54:19.504941    5710 main.go:141] libmachine: Decoding PEM data...
	I0729 16:54:19.504954    5710 main.go:141] libmachine: Parsing certificate...
	I0729 16:54:19.504991    5710 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:54:19.505015    5710 main.go:141] libmachine: Decoding PEM data...
	I0729 16:54:19.505024    5710 main.go:141] libmachine: Parsing certificate...
	I0729 16:54:19.505403    5710 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:54:19.657739    5710 main.go:141] libmachine: Creating SSH key...
	I0729 16:54:19.751533    5710 main.go:141] libmachine: Creating Disk image...
	I0729 16:54:19.751541    5710 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:54:19.751744    5710 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/calico-600000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/calico-600000/disk.qcow2
	I0729 16:54:19.761086    5710 main.go:141] libmachine: STDOUT: 
	I0729 16:54:19.761113    5710 main.go:141] libmachine: STDERR: 
	I0729 16:54:19.761154    5710 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/calico-600000/disk.qcow2 +20000M
	I0729 16:54:19.769411    5710 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:54:19.769427    5710 main.go:141] libmachine: STDERR: 
	I0729 16:54:19.769440    5710 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/calico-600000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/calico-600000/disk.qcow2
	I0729 16:54:19.769444    5710 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:54:19.769457    5710 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:54:19.769481    5710 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/calico-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/calico-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/calico-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:d8:67:98:4b:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/calico-600000/disk.qcow2
	I0729 16:54:19.771158    5710 main.go:141] libmachine: STDOUT: 
	I0729 16:54:19.771174    5710 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:54:19.771191    5710 client.go:171] duration metric: took 266.517083ms to LocalClient.Create
	I0729 16:54:21.772022    5710 start.go:128] duration metric: took 2.291908417s to createHost
	I0729 16:54:21.772080    5710 start.go:83] releasing machines lock for "calico-600000", held for 2.292020125s
	W0729 16:54:21.772109    5710 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:54:21.781542    5710 out.go:177] * Deleting "calico-600000" in qemu2 ...
	W0729 16:54:21.805285    5710 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:54:21.805306    5710 start.go:729] Will try again in 5 seconds ...
	I0729 16:54:26.804896    5710 start.go:360] acquireMachinesLock for calico-600000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:54:26.805420    5710 start.go:364] duration metric: took 423.167µs to acquireMachinesLock for "calico-600000"
	I0729 16:54:26.805513    5710 start.go:93] Provisioning new machine with config: &{Name:calico-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:54:26.805727    5710 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:54:26.813293    5710 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:54:26.850433    5710 start.go:159] libmachine.API.Create for "calico-600000" (driver="qemu2")
	I0729 16:54:26.850475    5710 client.go:168] LocalClient.Create starting
	I0729 16:54:26.850581    5710 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:54:26.850649    5710 main.go:141] libmachine: Decoding PEM data...
	I0729 16:54:26.850666    5710 main.go:141] libmachine: Parsing certificate...
	I0729 16:54:26.850747    5710 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:54:26.850787    5710 main.go:141] libmachine: Decoding PEM data...
	I0729 16:54:26.850807    5710 main.go:141] libmachine: Parsing certificate...
	I0729 16:54:26.851279    5710 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:54:27.008785    5710 main.go:141] libmachine: Creating SSH key...
	I0729 16:54:27.105098    5710 main.go:141] libmachine: Creating Disk image...
	I0729 16:54:27.105104    5710 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:54:27.105300    5710 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/calico-600000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/calico-600000/disk.qcow2
	I0729 16:54:27.114624    5710 main.go:141] libmachine: STDOUT: 
	I0729 16:54:27.114686    5710 main.go:141] libmachine: STDERR: 
	I0729 16:54:27.114735    5710 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/calico-600000/disk.qcow2 +20000M
	I0729 16:54:27.122669    5710 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:54:27.122686    5710 main.go:141] libmachine: STDERR: 
	I0729 16:54:27.122697    5710 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/calico-600000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/calico-600000/disk.qcow2
	I0729 16:54:27.122701    5710 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:54:27.122711    5710 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:54:27.122745    5710 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/calico-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/calico-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/calico-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:c0:b3:25:33:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/calico-600000/disk.qcow2
	I0729 16:54:27.124457    5710 main.go:141] libmachine: STDOUT: 
	I0729 16:54:27.124472    5710 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:54:27.124485    5710 client.go:171] duration metric: took 274.122667ms to LocalClient.Create
	I0729 16:54:29.125781    5710 start.go:128] duration metric: took 2.320985083s to createHost
	I0729 16:54:29.125817    5710 start.go:83] releasing machines lock for "calico-600000", held for 2.321301583s
	W0729 16:54:29.125940    5710 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-600000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:54:29.133282    5710 out.go:177] 
	W0729 16:54:29.138279    5710 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:54:29.138293    5710 out.go:239] * 
	W0729 16:54:29.139128    5710 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:54:29.150125    5710 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.80s)
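
The failure shape is identical across every plugin in this group: two create attempts with a delete in between, then exit status 80, matching the GUEST_PROVISION reason printed in the log; QEMU itself is never exercised because socket_vmnet_client exits before launching it. One hedged way to confirm the hypervisor side is healthy is to boot the same ISO with QEMU's user-mode networking, bypassing socket_vmnet entirely (paths copied from the log above; this is an isolation sketch, not the driver's actual invocation):

	# Same machine type and accelerator as the driver uses, but with
	# user-mode networking instead of the socket_vmnet file descriptor.
	qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 3072 -smp 2 \
	  -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	  -display none -boot d \
	  -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso \
	  -netdev user,id=net0 -device virtio-net-pci,netdev=net0
	# If this boots, the regression is isolated to the socket_vmnet daemon/socket.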

TestNetworkPlugins/group/false/Start (9.72s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-600000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-600000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.715460292s)

-- stdout --
	* [false-600000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-600000" primary control-plane node in "false-600000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-600000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:54:31.497763    5829 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:54:31.497892    5829 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:31.497896    5829 out.go:304] Setting ErrFile to fd 2...
	I0729 16:54:31.497897    5829 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:31.498023    5829 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:54:31.499058    5829 out.go:298] Setting JSON to false
	I0729 16:54:31.515321    5829 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3234,"bootTime":1722294037,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:54:31.515392    5829 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:54:31.522637    5829 out.go:177] * [false-600000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:54:31.530517    5829 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:54:31.530563    5829 notify.go:220] Checking for updates...
	I0729 16:54:31.538533    5829 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:54:31.541528    5829 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:54:31.544582    5829 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:54:31.547465    5829 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:54:31.550479    5829 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:54:31.553832    5829 config.go:182] Loaded profile config "multinode-100000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:54:31.553903    5829 config.go:182] Loaded profile config "stopped-upgrade-480000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:54:31.553959    5829 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:54:31.558522    5829 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:54:31.565520    5829 start.go:297] selected driver: qemu2
	I0729 16:54:31.565527    5829 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:54:31.565533    5829 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:54:31.567847    5829 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:54:31.570483    5829 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:54:31.573553    5829 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:54:31.573566    5829 cni.go:84] Creating CNI manager for "false"
	I0729 16:54:31.573594    5829 start.go:340] cluster config:
	{Name:false-600000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:54:31.577218    5829 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:54:31.584506    5829 out.go:177] * Starting "false-600000" primary control-plane node in "false-600000" cluster
	I0729 16:54:31.588498    5829 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:54:31.588511    5829 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:54:31.588521    5829 cache.go:56] Caching tarball of preloaded images
	I0729 16:54:31.588575    5829 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:54:31.588580    5829 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:54:31.588626    5829 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/false-600000/config.json ...
	I0729 16:54:31.588635    5829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/false-600000/config.json: {Name:mk10badbd55fa4aff8cf03d00620d5d974705011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:54:31.588952    5829 start.go:360] acquireMachinesLock for false-600000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:54:31.588981    5829 start.go:364] duration metric: took 24.084µs to acquireMachinesLock for "false-600000"
	I0729 16:54:31.588991    5829 start.go:93] Provisioning new machine with config: &{Name:false-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:false-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:54:31.589021    5829 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:54:31.593516    5829 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:54:31.609189    5829 start.go:159] libmachine.API.Create for "false-600000" (driver="qemu2")
	I0729 16:54:31.609215    5829 client.go:168] LocalClient.Create starting
	I0729 16:54:31.609274    5829 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:54:31.609306    5829 main.go:141] libmachine: Decoding PEM data...
	I0729 16:54:31.609316    5829 main.go:141] libmachine: Parsing certificate...
	I0729 16:54:31.609354    5829 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:54:31.609377    5829 main.go:141] libmachine: Decoding PEM data...
	I0729 16:54:31.609385    5829 main.go:141] libmachine: Parsing certificate...
	I0729 16:54:31.609826    5829 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:54:31.762968    5829 main.go:141] libmachine: Creating SSH key...
	I0729 16:54:31.807182    5829 main.go:141] libmachine: Creating Disk image...
	I0729 16:54:31.807192    5829 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:54:31.807393    5829 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/false-600000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/false-600000/disk.qcow2
	I0729 16:54:31.816625    5829 main.go:141] libmachine: STDOUT: 
	I0729 16:54:31.816643    5829 main.go:141] libmachine: STDERR: 
	I0729 16:54:31.816696    5829 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/false-600000/disk.qcow2 +20000M
	I0729 16:54:31.824762    5829 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:54:31.824777    5829 main.go:141] libmachine: STDERR: 
	I0729 16:54:31.824798    5829 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/false-600000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/false-600000/disk.qcow2
	I0729 16:54:31.824804    5829 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:54:31.824816    5829 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:54:31.824840    5829 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/false-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/false-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/false-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:5f:6a:9f:6f:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/false-600000/disk.qcow2
	I0729 16:54:31.826508    5829 main.go:141] libmachine: STDOUT: 
	I0729 16:54:31.826521    5829 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:54:31.826541    5829 client.go:171] duration metric: took 217.391125ms to LocalClient.Create
	I0729 16:54:33.828301    5829 start.go:128] duration metric: took 2.239887875s to createHost
	I0729 16:54:33.828412    5829 start.go:83] releasing machines lock for "false-600000", held for 2.240101667s
	W0729 16:54:33.828466    5829 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:54:33.834917    5829 out.go:177] * Deleting "false-600000" in qemu2 ...
	W0729 16:54:33.864035    5829 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:54:33.864076    5829 start.go:729] Will try again in 5 seconds ...
	I0729 16:54:38.865081    5829 start.go:360] acquireMachinesLock for false-600000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:54:38.865498    5829 start.go:364] duration metric: took 332.959µs to acquireMachinesLock for "false-600000"
	I0729 16:54:38.865628    5829 start.go:93] Provisioning new machine with config: &{Name:false-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:false-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:54:38.865858    5829 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:54:38.874198    5829 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:54:38.924158    5829 start.go:159] libmachine.API.Create for "false-600000" (driver="qemu2")
	I0729 16:54:38.924211    5829 client.go:168] LocalClient.Create starting
	I0729 16:54:38.924320    5829 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:54:38.924391    5829 main.go:141] libmachine: Decoding PEM data...
	I0729 16:54:38.924406    5829 main.go:141] libmachine: Parsing certificate...
	I0729 16:54:38.924463    5829 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:54:38.924508    5829 main.go:141] libmachine: Decoding PEM data...
	I0729 16:54:38.924526    5829 main.go:141] libmachine: Parsing certificate...
	I0729 16:54:38.925107    5829 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:54:39.083718    5829 main.go:141] libmachine: Creating SSH key...
	I0729 16:54:39.130223    5829 main.go:141] libmachine: Creating Disk image...
	I0729 16:54:39.130233    5829 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:54:39.130422    5829 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/false-600000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/false-600000/disk.qcow2
	I0729 16:54:39.139681    5829 main.go:141] libmachine: STDOUT: 
	I0729 16:54:39.139704    5829 main.go:141] libmachine: STDERR: 
	I0729 16:54:39.139756    5829 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/false-600000/disk.qcow2 +20000M
	I0729 16:54:39.147950    5829 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:54:39.147969    5829 main.go:141] libmachine: STDERR: 
	I0729 16:54:39.147981    5829 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/false-600000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/false-600000/disk.qcow2
	I0729 16:54:39.147986    5829 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:54:39.147995    5829 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:54:39.148030    5829 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/false-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/false-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/false-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:79:79:60:85:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/false-600000/disk.qcow2
	I0729 16:54:39.149693    5829 main.go:141] libmachine: STDOUT: 
	I0729 16:54:39.149838    5829 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:54:39.149851    5829 client.go:171] duration metric: took 225.683208ms to LocalClient.Create
	I0729 16:54:41.151586    5829 start.go:128] duration metric: took 2.286151875s to createHost
	I0729 16:54:41.151641    5829 start.go:83] releasing machines lock for "false-600000", held for 2.286577s
	W0729 16:54:41.151862    5829 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-600000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-600000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:54:41.156406    5829 out.go:177] 
	W0729 16:54:41.163335    5829 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:54:41.163349    5829 out.go:239] * 
	* 
	W0729 16:54:41.164675    5829 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:54:41.174312    5829 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.72s)
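
The qemu command lines in this log rely on socket_vmnet_client connecting to /var/run/socket_vmnet and handing the connected descriptor to the qemu child as fd 3, which is why the invocation carries "-netdev socket,id=net0,fd=3"; once that connect is refused, qemu never starts. A hedged Go sketch of the same fd-passing pattern (qemu arguments heavily abbreviated; this is not the actual socket_vmnet_client implementation):

	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		// Connect to the vmnet daemon socket (path from the log above).
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatalf(`Failed to connect to "/var/run/socket_vmnet": %v`, err)
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Fatal(err)
		}
		// ExtraFiles[0] becomes fd 3 in the child, matching "-netdev socket,fd=3".
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f}
		if err := cmd.Start(); err != nil {
			log.Fatal(err)
		}
	}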
TestStartStop/group/old-k8s-version/serial/FirstStart (9.78s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-356000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-356000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.725552875s)
-- stdout --
	* [old-k8s-version-356000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-356000" primary control-plane node in "old-k8s-version-356000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-356000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0729 16:54:43.532451    5947 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:54:43.532591    5947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:43.532595    5947 out.go:304] Setting ErrFile to fd 2...
	I0729 16:54:43.532597    5947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:43.532775    5947 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:54:43.533955    5947 out.go:298] Setting JSON to false
	I0729 16:54:43.550246    5947 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3246,"bootTime":1722294037,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:54:43.550310    5947 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:54:43.556394    5947 out.go:177] * [old-k8s-version-356000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:54:43.563328    5947 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:54:43.563376    5947 notify.go:220] Checking for updates...
	I0729 16:54:43.571175    5947 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:54:43.574240    5947 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:54:43.577300    5947 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:54:43.580227    5947 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:54:43.583242    5947 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:54:43.586663    5947 config.go:182] Loaded profile config "multinode-100000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:54:43.586741    5947 config.go:182] Loaded profile config "stopped-upgrade-480000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:54:43.586787    5947 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:54:43.590246    5947 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:54:43.597255    5947 start.go:297] selected driver: qemu2
	I0729 16:54:43.597262    5947 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:54:43.597268    5947 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:54:43.599445    5947 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:54:43.600735    5947 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:54:43.603383    5947 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:54:43.603425    5947 cni.go:84] Creating CNI manager for ""
	I0729 16:54:43.603434    5947 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 16:54:43.603466    5947 start.go:340] cluster config:
	{Name:old-k8s-version-356000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-356000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/
socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:54:43.607140    5947 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:54:43.614235    5947 out.go:177] * Starting "old-k8s-version-356000" primary control-plane node in "old-k8s-version-356000" cluster
	I0729 16:54:43.618264    5947 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 16:54:43.618278    5947 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 16:54:43.618289    5947 cache.go:56] Caching tarball of preloaded images
	I0729 16:54:43.618345    5947 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:54:43.618363    5947 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 16:54:43.618419    5947 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/old-k8s-version-356000/config.json ...
	I0729 16:54:43.618432    5947 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/old-k8s-version-356000/config.json: {Name:mkf88c65085db285f09a20fa51f1b342364c9c41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:54:43.618707    5947 start.go:360] acquireMachinesLock for old-k8s-version-356000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:54:43.618743    5947 start.go:364] duration metric: took 27.875µs to acquireMachinesLock for "old-k8s-version-356000"
	I0729 16:54:43.618756    5947 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-356000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-356000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:54:43.618791    5947 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:54:43.623267    5947 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:54:43.638769    5947 start.go:159] libmachine.API.Create for "old-k8s-version-356000" (driver="qemu2")
	I0729 16:54:43.638793    5947 client.go:168] LocalClient.Create starting
	I0729 16:54:43.638858    5947 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:54:43.638894    5947 main.go:141] libmachine: Decoding PEM data...
	I0729 16:54:43.638904    5947 main.go:141] libmachine: Parsing certificate...
	I0729 16:54:43.638941    5947 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:54:43.638963    5947 main.go:141] libmachine: Decoding PEM data...
	I0729 16:54:43.638969    5947 main.go:141] libmachine: Parsing certificate...
	I0729 16:54:43.639284    5947 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:54:43.789942    5947 main.go:141] libmachine: Creating SSH key...
	I0729 16:54:43.836335    5947 main.go:141] libmachine: Creating Disk image...
	I0729 16:54:43.836340    5947 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:54:43.836505    5947 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/disk.qcow2
	I0729 16:54:43.845618    5947 main.go:141] libmachine: STDOUT: 
	I0729 16:54:43.845641    5947 main.go:141] libmachine: STDERR: 
	I0729 16:54:43.845708    5947 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/disk.qcow2 +20000M
	I0729 16:54:43.853759    5947 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:54:43.853774    5947 main.go:141] libmachine: STDERR: 
	I0729 16:54:43.853786    5947 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/disk.qcow2
	I0729 16:54:43.853789    5947 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:54:43.853802    5947 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:54:43.853826    5947 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:c8:86:a1:df:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/disk.qcow2
	I0729 16:54:43.855480    5947 main.go:141] libmachine: STDOUT: 
	I0729 16:54:43.855497    5947 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:54:43.855514    5947 client.go:171] duration metric: took 216.750334ms to LocalClient.Create
	I0729 16:54:45.857419    5947 start.go:128] duration metric: took 2.238940542s to createHost
	I0729 16:54:45.857526    5947 start.go:83] releasing machines lock for "old-k8s-version-356000", held for 2.239106958s
	W0729 16:54:45.857589    5947 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:54:45.873904    5947 out.go:177] * Deleting "old-k8s-version-356000" in qemu2 ...
	W0729 16:54:45.901156    5947 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:54:45.901197    5947 start.go:729] Will try again in 5 seconds ...
	I0729 16:54:50.902785    5947 start.go:360] acquireMachinesLock for old-k8s-version-356000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:54:50.903379    5947 start.go:364] duration metric: took 476.583µs to acquireMachinesLock for "old-k8s-version-356000"
	I0729 16:54:50.903534    5947 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-356000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-356000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:54:50.903851    5947 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:54:50.914493    5947 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:54:50.957987    5947 start.go:159] libmachine.API.Create for "old-k8s-version-356000" (driver="qemu2")
	I0729 16:54:50.958047    5947 client.go:168] LocalClient.Create starting
	I0729 16:54:50.958168    5947 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:54:50.958229    5947 main.go:141] libmachine: Decoding PEM data...
	I0729 16:54:50.958243    5947 main.go:141] libmachine: Parsing certificate...
	I0729 16:54:50.958306    5947 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:54:50.958355    5947 main.go:141] libmachine: Decoding PEM data...
	I0729 16:54:50.958369    5947 main.go:141] libmachine: Parsing certificate...
	I0729 16:54:50.958867    5947 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:54:51.118538    5947 main.go:141] libmachine: Creating SSH key...
	I0729 16:54:51.164934    5947 main.go:141] libmachine: Creating Disk image...
	I0729 16:54:51.164944    5947 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:54:51.165137    5947 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/disk.qcow2
	I0729 16:54:51.174464    5947 main.go:141] libmachine: STDOUT: 
	I0729 16:54:51.174485    5947 main.go:141] libmachine: STDERR: 
	I0729 16:54:51.174535    5947 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/disk.qcow2 +20000M
	I0729 16:54:51.182439    5947 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:54:51.182455    5947 main.go:141] libmachine: STDERR: 
	I0729 16:54:51.182464    5947 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/disk.qcow2
	I0729 16:54:51.182469    5947 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:54:51.182480    5947 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:54:51.182510    5947 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:ff:8e:b6:71:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/disk.qcow2
	I0729 16:54:51.184159    5947 main.go:141] libmachine: STDOUT: 
	I0729 16:54:51.184174    5947 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:54:51.184185    5947 client.go:171] duration metric: took 226.156583ms to LocalClient.Create
	I0729 16:54:53.186099    5947 start.go:128] duration metric: took 2.282456334s to createHost
	I0729 16:54:53.186171    5947 start.go:83] releasing machines lock for "old-k8s-version-356000", held for 2.282982833s
	W0729 16:54:53.186335    5947 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-356000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-356000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:54:53.195759    5947 out.go:177] 
	W0729 16:54:53.202645    5947 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:54:53.202669    5947 out.go:239] * 
	* 
	W0729 16:54:53.204473    5947 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:54:53.215768    5947 out.go:177] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-356000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000: exit status 7 (53.329458ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-356000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.78s)
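
The post-mortem helper above treats "minikube status" exit code 7 as "host exists but is not running (may be ok)" and skips log retrieval rather than failing hard. A simplified sketch of that check, with the command line copied from the helpers_test.go output and error handling abbreviated:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Command copied from the post-mortem section above.
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "old-k8s-version-356000",
			"-n", "old-k8s-version-356000")
		out, err := cmd.Output()
		fmt.Printf("status: %s", out)
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
			// Exit status 7 means the host is not running, so the
			// helper skips log retrieval instead of failing the test.
			fmt.Println("host not running, skipping log retrieval")
		}
	}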
TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-356000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-356000 create -f testdata/busybox.yaml: exit status 1 (28.756125ms)
** stderr ** 
	error: context "old-k8s-version-356000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-356000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000: exit status 7 (29.206167ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-356000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000: exit status 7 (28.922208ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-356000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
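
The kubectl create fails here because the cluster never came up, so no "old-k8s-version-356000" context was ever written to the kubeconfig. A hedged sketch using client-go to list the contexts that do exist (kubeconfig path taken from the test environment above; any kubeconfig path works):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// KUBECONFIG path from the test environment above.
		cfg, err := clientcmd.LoadFromFile(
			"/Users/jenkins/minikube-integration/19347-923/kubeconfig")
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		for name := range cfg.Contexts {
			// "old-k8s-version-356000" will be absent in this run.
			fmt.Println("context:", name)
		}
	}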
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-356000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-356000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-356000 describe deploy/metrics-server -n kube-system: exit status 1 (26.928167ms)
** stderr ** 
	error: context "old-k8s-version-356000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-356000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000: exit status 7 (31.105208ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-356000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
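
The assertion behind this failure is a plain substring check: the test expects the metrics-server deployment description to contain the overridden image " fake.domain/registry.k8s.io/echoserver:1.4". A sketch mirroring that check (deployInfo is empty here because the describe call itself failed in the log above):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Would normally hold "kubectl describe deploy/metrics-server" output;
		// empty because the describe call failed against the missing context.
		deployInfo := ""
		want := " fake.domain/registry.k8s.io/echoserver:1.4"
		if !strings.Contains(deployInfo, want) {
			fmt.Printf("addon did not load correct image. Expected to contain %q\n", want)
		}
	}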
TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-356000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-356000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.18794775s)
-- stdout --
	* [old-k8s-version-356000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-356000" primary control-plane node in "old-k8s-version-356000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-356000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-356000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:54:57.179197    6001 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:54:57.179347    6001 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:57.179350    6001 out.go:304] Setting ErrFile to fd 2...
	I0729 16:54:57.179353    6001 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:57.179496    6001 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:54:57.180516    6001 out.go:298] Setting JSON to false
	I0729 16:54:57.196979    6001 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3260,"bootTime":1722294037,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:54:57.197057    6001 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:54:57.201487    6001 out.go:177] * [old-k8s-version-356000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:54:57.208414    6001 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:54:57.208503    6001 notify.go:220] Checking for updates...
	I0729 16:54:57.215372    6001 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:54:57.218427    6001 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:54:57.221407    6001 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:54:57.224303    6001 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:54:57.227413    6001 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:54:57.230759    6001 config.go:182] Loaded profile config "old-k8s-version-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 16:54:57.234373    6001 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 16:54:57.237332    6001 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:54:57.241395    6001 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:54:57.248363    6001 start.go:297] selected driver: qemu2
	I0729 16:54:57.248369    6001 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-356000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-356000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:54:57.248421    6001 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:54:57.250736    6001 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:54:57.250774    6001 cni.go:84] Creating CNI manager for ""
	I0729 16:54:57.250781    6001 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 16:54:57.250808    6001 start.go:340] cluster config:
	{Name:old-k8s-version-356000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-356000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:54:57.254144    6001 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:54:57.261387    6001 out.go:177] * Starting "old-k8s-version-356000" primary control-plane node in "old-k8s-version-356000" cluster
	I0729 16:54:57.265422    6001 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 16:54:57.265438    6001 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 16:54:57.265449    6001 cache.go:56] Caching tarball of preloaded images
	I0729 16:54:57.265502    6001 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:54:57.265508    6001 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 16:54:57.265570    6001 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/old-k8s-version-356000/config.json ...
	I0729 16:54:57.266027    6001 start.go:360] acquireMachinesLock for old-k8s-version-356000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:54:57.266053    6001 start.go:364] duration metric: took 20.375µs to acquireMachinesLock for "old-k8s-version-356000"
	I0729 16:54:57.266061    6001 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:54:57.266065    6001 fix.go:54] fixHost starting: 
	I0729 16:54:57.266173    6001 fix.go:112] recreateIfNeeded on old-k8s-version-356000: state=Stopped err=<nil>
	W0729 16:54:57.266181    6001 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:54:57.269369    6001 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-356000" ...
	I0729 16:54:57.277171    6001 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:54:57.277201    6001 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:ff:8e:b6:71:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/disk.qcow2
	I0729 16:54:57.279139    6001 main.go:141] libmachine: STDOUT: 
	I0729 16:54:57.279159    6001 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:54:57.279184    6001 fix.go:56] duration metric: took 13.119417ms for fixHost
	I0729 16:54:57.279187    6001 start.go:83] releasing machines lock for "old-k8s-version-356000", held for 13.132208ms
	W0729 16:54:57.279194    6001 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:54:57.279222    6001 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:54:57.279226    6001 start.go:729] Will try again in 5 seconds ...
	I0729 16:55:02.281158    6001 start.go:360] acquireMachinesLock for old-k8s-version-356000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:55:02.281610    6001 start.go:364] duration metric: took 356.708µs to acquireMachinesLock for "old-k8s-version-356000"
	I0729 16:55:02.281685    6001 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:55:02.281700    6001 fix.go:54] fixHost starting: 
	I0729 16:55:02.282202    6001 fix.go:112] recreateIfNeeded on old-k8s-version-356000: state=Stopped err=<nil>
	W0729 16:55:02.282220    6001 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:55:02.290733    6001 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-356000" ...
	I0729 16:55:02.294687    6001 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:55:02.294867    6001 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:ff:8e:b6:71:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/old-k8s-version-356000/disk.qcow2
	I0729 16:55:02.302736    6001 main.go:141] libmachine: STDOUT: 
	I0729 16:55:02.302791    6001 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:55:02.302861    6001 fix.go:56] duration metric: took 21.163834ms for fixHost
	I0729 16:55:02.302873    6001 start.go:83] releasing machines lock for "old-k8s-version-356000", held for 21.24925ms
	W0729 16:55:02.302994    6001 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-356000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-356000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:02.311639    6001 out.go:177] 
	W0729 16:55:02.315769    6001 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:55:02.315795    6001 out.go:239] * 
	* 
	W0729 16:55:02.317012    6001 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:55:02.326636    6001 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-356000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000: exit status 7 (56.657666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-356000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
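
Both restart attempts above die at the same step: libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the /var/run/socket_vmnet socket ("Connection refused"), so the VM never comes up. A minimal check of the daemon on the build host is sketched below; the daemon binary path and the gateway address are assumptions based on a default socket_vmnet install, not values taken from this run:

	# Is the socket present, and is anything serving it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If the daemon is down, start it by hand (vmnet requires root);
	# 192.168.105.1 is only an example gateway address.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet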

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-356000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000: exit status 7 (31.84175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-356000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-356000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-356000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-356000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.106917ms)

** stderr ** 
	error: context "old-k8s-version-356000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-356000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000: exit status 7 (28.422959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-356000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-356000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000: exit status 7 (29.098417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-356000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
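
Two separate facts show up in this diff: the expected image list for v1.20.0 still uses the legacy k8s.gcr.io registry names, and the "got" side is empty because image list ran against a profile whose VM never started. Re-running the command against a running profile is how to tell a naming mismatch from a dead host; a sketch, assuming the profile has been started successfully and that the table formatter is available in this minikube build:

	out/minikube-darwin-arm64 -p old-k8s-version-356000 image list --format=table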

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-356000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-356000 --alsologtostderr -v=1: exit status 83 (40.8895ms)

-- stdout --
	* The control-plane node old-k8s-version-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-356000"

-- /stdout --
** stderr ** 
	I0729 16:55:02.583670    6024 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:55:02.584649    6024 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:02.584657    6024 out.go:304] Setting ErrFile to fd 2...
	I0729 16:55:02.584660    6024 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:02.584823    6024 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:55:02.585043    6024 out.go:298] Setting JSON to false
	I0729 16:55:02.585051    6024 mustload.go:65] Loading cluster: old-k8s-version-356000
	I0729 16:55:02.585230    6024 config.go:182] Loaded profile config "old-k8s-version-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 16:55:02.589969    6024 out.go:177] * The control-plane node old-k8s-version-356000 host is not running: state=Stopped
	I0729 16:55:02.593008    6024 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-356000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-356000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000: exit status 7 (29.130792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-356000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000: exit status 7 (29.413917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-356000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
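
Note the exit code: pause returns 83 rather than a generic failure when the control-plane host is merely stopped, as the log above shows. A wrapper script can use that to separate "wrong state" from a hard error; a sketch keyed off the behavior captured in this run:

	out/minikube-darwin-arm64 pause -p old-k8s-version-356000
	case $? in
	  0)  echo "paused" ;;
	  83) echo "host not running; start the profile first" ;;  # the state guard seen above
	  *)  echo "pause failed for another reason" ;;
	esac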

TestStartStop/group/no-preload/serial/FirstStart (9.86s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-687000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-687000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.809728042s)

-- stdout --
	* [no-preload-687000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-687000" primary control-plane node in "no-preload-687000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-687000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:55:02.897329    6041 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:55:02.897455    6041 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:02.897458    6041 out.go:304] Setting ErrFile to fd 2...
	I0729 16:55:02.897461    6041 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:02.897593    6041 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:55:02.898673    6041 out.go:298] Setting JSON to false
	I0729 16:55:02.914827    6041 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3265,"bootTime":1722294037,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:55:02.914897    6041 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:55:02.919627    6041 out.go:177] * [no-preload-687000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:55:02.926642    6041 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:55:02.926702    6041 notify.go:220] Checking for updates...
	I0729 16:55:02.934458    6041 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:55:02.938552    6041 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:55:02.941615    6041 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:55:02.942976    6041 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:55:02.945566    6041 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:55:02.948937    6041 config.go:182] Loaded profile config "multinode-100000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:55:02.949000    6041 config.go:182] Loaded profile config "stopped-upgrade-480000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:55:02.949050    6041 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:55:02.953353    6041 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:55:02.960619    6041 start.go:297] selected driver: qemu2
	I0729 16:55:02.960626    6041 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:55:02.960632    6041 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:55:02.962744    6041 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:55:02.965542    6041 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:55:02.968623    6041 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:55:02.968642    6041 cni.go:84] Creating CNI manager for ""
	I0729 16:55:02.968648    6041 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:55:02.968651    6041 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:55:02.968679    6041 start.go:340] cluster config:
	{Name:no-preload-687000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-687000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:55:02.972196    6041 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:02.978545    6041 out.go:177] * Starting "no-preload-687000" primary control-plane node in "no-preload-687000" cluster
	I0729 16:55:02.982563    6041 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 16:55:02.982631    6041 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/no-preload-687000/config.json ...
	I0729 16:55:02.982647    6041 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/no-preload-687000/config.json: {Name:mkab52cfd83dcfee4f34b68fabe571512ea52b45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:55:02.982667    6041 cache.go:107] acquiring lock: {Name:mk9b7516d94ba00b6a4aa7e39cdfccbd9abc18a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:02.982686    6041 cache.go:107] acquiring lock: {Name:mke28d8bb287643611e1742f74c0a2702411da54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:02.982728    6041 cache.go:107] acquiring lock: {Name:mk353a6d6e3a11c433ef6f01a84187bc2bf09a7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:02.982751    6041 cache.go:115] /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 16:55:02.982757    6041 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 91.417µs
	I0729 16:55:02.982761    6041 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 16:55:02.982775    6041 cache.go:107] acquiring lock: {Name:mk2a9e0af7c4ff7d99442d8f691344a7242706ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:02.982667    6041 cache.go:107] acquiring lock: {Name:mk5dfb16c2b32713424bf4f7fa30012d56659749 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:02.982859    6041 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 16:55:02.982862    6041 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 16:55:02.982894    6041 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 16:55:02.982878    6041 cache.go:107] acquiring lock: {Name:mk615afb92907de219fc62fda402c26b3383bb72 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:02.982902    6041 cache.go:107] acquiring lock: {Name:mk574922717d3c4ee9fcfbe98e499dded2015316 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:02.982935    6041 start.go:360] acquireMachinesLock for no-preload-687000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:55:02.983004    6041 cache.go:107] acquiring lock: {Name:mkc13d1da61393c3be1b47df52b90bd50821a28c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:02.983020    6041 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 16:55:02.983112    6041 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 16:55:02.983118    6041 start.go:364] duration metric: took 176.833µs to acquireMachinesLock for "no-preload-687000"
	I0729 16:55:02.983130    6041 start.go:93] Provisioning new machine with config: &{Name:no-preload-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-687000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:55:02.983169    6041 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:55:02.983236    6041 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 16:55:02.983268    6041 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 16:55:02.987572    6041 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:55:02.990209    6041 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 16:55:02.990362    6041 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 16:55:02.990418    6041 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 16:55:02.990456    6041 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 16:55:02.990481    6041 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 16:55:02.990496    6041 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 16:55:02.990523    6041 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 16:55:03.004172    6041 start.go:159] libmachine.API.Create for "no-preload-687000" (driver="qemu2")
	I0729 16:55:03.004201    6041 client.go:168] LocalClient.Create starting
	I0729 16:55:03.004270    6041 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:55:03.004303    6041 main.go:141] libmachine: Decoding PEM data...
	I0729 16:55:03.004311    6041 main.go:141] libmachine: Parsing certificate...
	I0729 16:55:03.004354    6041 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:55:03.004377    6041 main.go:141] libmachine: Decoding PEM data...
	I0729 16:55:03.004385    6041 main.go:141] libmachine: Parsing certificate...
	I0729 16:55:03.004771    6041 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:55:03.161984    6041 main.go:141] libmachine: Creating SSH key...
	I0729 16:55:03.322179    6041 main.go:141] libmachine: Creating Disk image...
	I0729 16:55:03.322194    6041 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:55:03.322386    6041 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/disk.qcow2
	I0729 16:55:03.331964    6041 main.go:141] libmachine: STDOUT: 
	I0729 16:55:03.331979    6041 main.go:141] libmachine: STDERR: 
	I0729 16:55:03.332020    6041 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/disk.qcow2 +20000M
	I0729 16:55:03.340440    6041 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:55:03.340458    6041 main.go:141] libmachine: STDERR: 
	I0729 16:55:03.340469    6041 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/disk.qcow2
	I0729 16:55:03.340472    6041 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:55:03.340483    6041 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:55:03.340509    6041 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:ae:35:bd:37:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/disk.qcow2
	I0729 16:55:03.342375    6041 main.go:141] libmachine: STDOUT: 
	I0729 16:55:03.342390    6041 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:55:03.342406    6041 client.go:171] duration metric: took 338.221875ms to LocalClient.Create
	I0729 16:55:03.390279    6041 cache.go:162] opening:  /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 16:55:03.403004    6041 cache.go:162] opening:  /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0729 16:55:03.411391    6041 cache.go:162] opening:  /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0
	I0729 16:55:03.427592    6041 cache.go:162] opening:  /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 16:55:03.448908    6041 cache.go:162] opening:  /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 16:55:03.463662    6041 cache.go:162] opening:  /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 16:55:03.497032    6041 cache.go:162] opening:  /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 16:55:03.538468    6041 cache.go:157] /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0729 16:55:03.538489    6041 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 555.724709ms
	I0729 16:55:03.538501    6041 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0729 16:55:05.317621    6041 cache.go:157] /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0729 16:55:05.317666    6041 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 2.334974792s
	I0729 16:55:05.317694    6041 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0729 16:55:05.342518    6041 start.go:128] duration metric: took 2.359465125s to createHost
	I0729 16:55:05.342552    6041 start.go:83] releasing machines lock for "no-preload-687000", held for 2.359561042s
	W0729 16:55:05.342608    6041 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:05.353949    6041 out.go:177] * Deleting "no-preload-687000" in qemu2 ...
	W0729 16:55:05.380186    6041 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:05.380219    6041 start.go:729] Will try again in 5 seconds ...
	I0729 16:55:06.565814    6041 cache.go:157] /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0729 16:55:06.565843    6041 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 3.583314125s
	I0729 16:55:06.565854    6041 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0729 16:55:06.593633    6041 cache.go:157] /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0729 16:55:06.593643    6041 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 3.611177375s
	I0729 16:55:06.593649    6041 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0729 16:55:07.530524    6041 cache.go:157] /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0729 16:55:07.530556    6041 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 4.54813825s
	I0729 16:55:07.530572    6041 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0729 16:55:07.621700    6041 cache.go:157] /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0729 16:55:07.621731    6041 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 4.6390995s
	I0729 16:55:07.621746    6041 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0729 16:55:10.380174    6041 start.go:360] acquireMachinesLock for no-preload-687000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:55:10.380417    6041 start.go:364] duration metric: took 210.791µs to acquireMachinesLock for "no-preload-687000"
	I0729 16:55:10.380452    6041 start.go:93] Provisioning new machine with config: &{Name:no-preload-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-687000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:55:10.380541    6041 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:55:10.388111    6041 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:55:10.423300    6041 start.go:159] libmachine.API.Create for "no-preload-687000" (driver="qemu2")
	I0729 16:55:10.423344    6041 client.go:168] LocalClient.Create starting
	I0729 16:55:10.423465    6041 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:55:10.423526    6041 main.go:141] libmachine: Decoding PEM data...
	I0729 16:55:10.423551    6041 main.go:141] libmachine: Parsing certificate...
	I0729 16:55:10.423619    6041 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:55:10.423660    6041 main.go:141] libmachine: Decoding PEM data...
	I0729 16:55:10.423671    6041 main.go:141] libmachine: Parsing certificate...
	I0729 16:55:10.424158    6041 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:55:10.581122    6041 main.go:141] libmachine: Creating SSH key...
	I0729 16:55:10.619489    6041 main.go:141] libmachine: Creating Disk image...
	I0729 16:55:10.619499    6041 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:55:10.619709    6041 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/disk.qcow2
	I0729 16:55:10.629459    6041 main.go:141] libmachine: STDOUT: 
	I0729 16:55:10.629479    6041 main.go:141] libmachine: STDERR: 
	I0729 16:55:10.629538    6041 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/disk.qcow2 +20000M
	I0729 16:55:10.637688    6041 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:55:10.637702    6041 main.go:141] libmachine: STDERR: 
	I0729 16:55:10.637722    6041 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/disk.qcow2
	I0729 16:55:10.637726    6041 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:55:10.637741    6041 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:55:10.637770    6041 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:c2:2c:26:5f:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/disk.qcow2
	I0729 16:55:10.639622    6041 main.go:141] libmachine: STDOUT: 
	I0729 16:55:10.639637    6041 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:55:10.639650    6041 client.go:171] duration metric: took 216.309542ms to LocalClient.Create
	I0729 16:55:11.480912    6041 cache.go:157] /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0729 16:55:11.480969    6041 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 8.498622833s
	I0729 16:55:11.480987    6041 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0729 16:55:11.481040    6041 cache.go:87] Successfully saved all images to host disk.
	I0729 16:55:12.640764    6041 start.go:128] duration metric: took 2.260294125s to createHost
	I0729 16:55:12.640823    6041 start.go:83] releasing machines lock for "no-preload-687000", held for 2.2604865s
	W0729 16:55:12.641104    6041 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-687000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:12.652724    6041 out.go:177] 
	W0729 16:55:12.656761    6041 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:55:12.656798    6041 out.go:239] * 
	W0729 16:55:12.658052    6041 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:55:12.667648    6041 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-687000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-687000 -n no-preload-687000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-687000 -n no-preload-687000: exit status 7 (49.668167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-687000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.86s)
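
Every qemu2 start in this run dies at the same point: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet, so QEMU is never launched and the profile is left "Stopped". A minimal triage sketch for the CI host, assuming the standard socket_vmnet install layout implied by the SocketVMnetPath/SocketVMnetClientPath values in the config above (the --vmnet-gateway value is illustrative, not taken from this report):

    # Is anything serving the socket the tests expect?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # If not, start the daemon (it must run as root to use vmnet.framework);
    # the install prefix mirrors SocketVMnetClientPath above.
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet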

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-687000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-687000 create -f testdata/busybox.yaml: exit status 1 (28.936584ms)

** stderr ** 
	error: context "no-preload-687000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-687000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-687000 -n no-preload-687000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-687000 -n no-preload-687000: exit status 7 (30.488958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-687000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-687000 -n no-preload-687000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-687000 -n no-preload-687000: exit status 7 (29.574375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-687000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
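
The DeployApp failure is a cascade rather than an independent bug: FirstStart never created the cluster, so no "no-preload-687000" entry was ever written to the kubeconfig and every kubectl --context call fails up front. A quick way to confirm the missing context (plain kubectl, nothing suite-specific):

    kubectl config get-contexts                      # no-preload-687000 will be absent
    kubectl --context no-preload-687000 get nodes    # reproduces: context "no-preload-687000" does not exist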

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-687000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-687000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-687000 describe deploy/metrics-server -n kube-system: exit status 1 (28.329084ms)

** stderr ** 
	error: context "no-preload-687000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-687000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-687000 -n no-preload-687000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-687000 -n no-preload-687000: exit status 7 (30.565542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-687000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
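
The assertion at start_stop_delete_test.go:221 checks that the --images/--registries overrides took effect, i.e. that the metrics-server deployment's image was rewritten to fake.domain/registry.k8s.io/echoserver:1.4. On a cluster that actually started, the same check could be made directly; the jsonpath below is our sketch, not the test's own code:

    kubectl --context no-preload-687000 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected output: fake.domain/registry.k8s.io/echoserver:1.4

Here the check never gets that far because the context does not exist.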

TestStartStop/group/embed-certs/serial/FirstStart (9.96s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-958000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-958000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.910630542s)

-- stdout --
	* [embed-certs-958000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-958000" primary control-plane node in "embed-certs-958000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-958000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:55:14.934374    6111 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:55:14.934501    6111 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:14.934504    6111 out.go:304] Setting ErrFile to fd 2...
	I0729 16:55:14.934506    6111 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:14.934647    6111 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:55:14.935816    6111 out.go:298] Setting JSON to false
	I0729 16:55:14.951811    6111 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3277,"bootTime":1722294037,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:55:14.951870    6111 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:55:14.956737    6111 out.go:177] * [embed-certs-958000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:55:14.963594    6111 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:55:14.963610    6111 notify.go:220] Checking for updates...
	I0729 16:55:14.970832    6111 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:55:14.976801    6111 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:55:14.980750    6111 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:55:14.983780    6111 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:55:14.986822    6111 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:55:14.990122    6111 config.go:182] Loaded profile config "multinode-100000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:55:14.990197    6111 config.go:182] Loaded profile config "no-preload-687000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 16:55:14.990243    6111 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:55:14.993781    6111 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:55:15.000777    6111 start.go:297] selected driver: qemu2
	I0729 16:55:15.000783    6111 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:55:15.000789    6111 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:55:15.003103    6111 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:55:15.011794    6111 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:55:15.015065    6111 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:55:15.015104    6111 cni.go:84] Creating CNI manager for ""
	I0729 16:55:15.015112    6111 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:55:15.015118    6111 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:55:15.015143    6111 start.go:340] cluster config:
	{Name:embed-certs-958000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-958000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:55:15.018971    6111 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:15.026814    6111 out.go:177] * Starting "embed-certs-958000" primary control-plane node in "embed-certs-958000" cluster
	I0729 16:55:15.030793    6111 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:55:15.030810    6111 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:55:15.030823    6111 cache.go:56] Caching tarball of preloaded images
	I0729 16:55:15.030896    6111 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:55:15.030902    6111 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:55:15.030981    6111 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/embed-certs-958000/config.json ...
	I0729 16:55:15.030999    6111 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/embed-certs-958000/config.json: {Name:mk13dcd2171ecb3b519f7893cde66cfb16dfc51d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:55:15.031219    6111 start.go:360] acquireMachinesLock for embed-certs-958000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:55:15.031255    6111 start.go:364] duration metric: took 29.458µs to acquireMachinesLock for "embed-certs-958000"
	I0729 16:55:15.031267    6111 start.go:93] Provisioning new machine with config: &{Name:embed-certs-958000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-958000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:55:15.031297    6111 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:55:15.034717    6111 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:55:15.052785    6111 start.go:159] libmachine.API.Create for "embed-certs-958000" (driver="qemu2")
	I0729 16:55:15.052813    6111 client.go:168] LocalClient.Create starting
	I0729 16:55:15.052874    6111 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:55:15.052904    6111 main.go:141] libmachine: Decoding PEM data...
	I0729 16:55:15.052915    6111 main.go:141] libmachine: Parsing certificate...
	I0729 16:55:15.052953    6111 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:55:15.052976    6111 main.go:141] libmachine: Decoding PEM data...
	I0729 16:55:15.052985    6111 main.go:141] libmachine: Parsing certificate...
	I0729 16:55:15.053386    6111 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:55:15.206840    6111 main.go:141] libmachine: Creating SSH key...
	I0729 16:55:15.322792    6111 main.go:141] libmachine: Creating Disk image...
	I0729 16:55:15.322799    6111 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:55:15.322977    6111 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/disk.qcow2
	I0729 16:55:15.332031    6111 main.go:141] libmachine: STDOUT: 
	I0729 16:55:15.332048    6111 main.go:141] libmachine: STDERR: 
	I0729 16:55:15.332111    6111 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/disk.qcow2 +20000M
	I0729 16:55:15.339930    6111 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:55:15.339945    6111 main.go:141] libmachine: STDERR: 
	I0729 16:55:15.339957    6111 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/disk.qcow2
	I0729 16:55:15.339963    6111 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:55:15.339975    6111 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:55:15.340004    6111 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:d3:7f:63:4f:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/disk.qcow2
	I0729 16:55:15.341642    6111 main.go:141] libmachine: STDOUT: 
	I0729 16:55:15.341655    6111 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:55:15.341673    6111 client.go:171] duration metric: took 288.8655ms to LocalClient.Create
	I0729 16:55:17.343791    6111 start.go:128] duration metric: took 2.312552417s to createHost
	I0729 16:55:17.343878    6111 start.go:83] releasing machines lock for "embed-certs-958000", held for 2.312697833s
	W0729 16:55:17.343932    6111 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:17.359375    6111 out.go:177] * Deleting "embed-certs-958000" in qemu2 ...
	W0729 16:55:17.389662    6111 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:17.389685    6111 start.go:729] Will try again in 5 seconds ...
	I0729 16:55:22.389713    6111 start.go:360] acquireMachinesLock for embed-certs-958000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:55:22.407721    6111 start.go:364] duration metric: took 17.948459ms to acquireMachinesLock for "embed-certs-958000"
	I0729 16:55:22.407811    6111 start.go:93] Provisioning new machine with config: &{Name:embed-certs-958000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-958000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:55:22.408015    6111 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:55:22.417699    6111 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:55:22.464450    6111 start.go:159] libmachine.API.Create for "embed-certs-958000" (driver="qemu2")
	I0729 16:55:22.464508    6111 client.go:168] LocalClient.Create starting
	I0729 16:55:22.464678    6111 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:55:22.464748    6111 main.go:141] libmachine: Decoding PEM data...
	I0729 16:55:22.464764    6111 main.go:141] libmachine: Parsing certificate...
	I0729 16:55:22.464830    6111 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:55:22.464878    6111 main.go:141] libmachine: Decoding PEM data...
	I0729 16:55:22.464894    6111 main.go:141] libmachine: Parsing certificate...
	I0729 16:55:22.465376    6111 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:55:22.631694    6111 main.go:141] libmachine: Creating SSH key...
	I0729 16:55:22.753334    6111 main.go:141] libmachine: Creating Disk image...
	I0729 16:55:22.753342    6111 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:55:22.753513    6111 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/disk.qcow2
	I0729 16:55:22.762833    6111 main.go:141] libmachine: STDOUT: 
	I0729 16:55:22.762857    6111 main.go:141] libmachine: STDERR: 
	I0729 16:55:22.762938    6111 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/disk.qcow2 +20000M
	I0729 16:55:22.772290    6111 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:55:22.772309    6111 main.go:141] libmachine: STDERR: 
	I0729 16:55:22.772331    6111 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/disk.qcow2
	I0729 16:55:22.772339    6111 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:55:22.772356    6111 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:55:22.772402    6111 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:cf:1a:cb:e4:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/disk.qcow2
	I0729 16:55:22.774807    6111 main.go:141] libmachine: STDOUT: 
	I0729 16:55:22.774826    6111 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:55:22.774840    6111 client.go:171] duration metric: took 310.323792ms to LocalClient.Create
	I0729 16:55:24.776982    6111 start.go:128] duration metric: took 2.369012625s to createHost
	I0729 16:55:24.777063    6111 start.go:83] releasing machines lock for "embed-certs-958000", held for 2.3693905s
	W0729 16:55:24.777414    6111 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-958000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:24.791040    6111 out.go:177] 
	W0729 16:55:24.795076    6111 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:55:24.795133    6111 out.go:239] * 
	W0729 16:55:24.797342    6111 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:55:24.805963    6111 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-958000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-958000 -n embed-certs-958000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-958000 -n embed-certs-958000: exit status 7 (49.066375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-958000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.96s)
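
The embed-certs log above shows the driver's full retry path (StartHost fails, the profile is deleted, a second create is attempted 5 seconds later), with both attempts dying on the same socket_vmnet connect. To separate "QEMU/hvf is broken" from "only the vmnet socket is broken", one can boot the same ISO and disk by hand with user-mode networking; every flag below is copied from the failing command line except the -netdev, which is swapped (a debugging sketch, not how the qemu2 driver normally runs):

    qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 \
      -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
      -display none -boot d \
      -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/boot2docker.iso \
      -device virtio-net-pci,netdev=net0 -netdev user,id=net0 \
      /Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/disk.qcow2

If that boots, the hypervisor stack is healthy and the failure is isolated to the socket_vmnet daemon.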

TestStartStop/group/no-preload/serial/SecondStart (6.03s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-687000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-687000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.979038042s)

-- stdout --
	* [no-preload-687000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-687000" primary control-plane node in "no-preload-687000" cluster
	* Restarting existing qemu2 VM for "no-preload-687000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-687000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:55:16.489063    6133 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:55:16.489194    6133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:16.489198    6133 out.go:304] Setting ErrFile to fd 2...
	I0729 16:55:16.489200    6133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:16.489300    6133 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:55:16.490326    6133 out.go:298] Setting JSON to false
	I0729 16:55:16.506225    6133 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3279,"bootTime":1722294037,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:55:16.506290    6133 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:55:16.509943    6133 out.go:177] * [no-preload-687000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:55:16.516858    6133 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:55:16.516949    6133 notify.go:220] Checking for updates...
	I0729 16:55:16.523924    6133 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:55:16.526953    6133 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:55:16.529924    6133 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:55:16.532886    6133 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:55:16.535846    6133 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:55:16.539173    6133 config.go:182] Loaded profile config "no-preload-687000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 16:55:16.539434    6133 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:55:16.543874    6133 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:55:16.550849    6133 start.go:297] selected driver: qemu2
	I0729 16:55:16.550855    6133 start.go:901] validating driver "qemu2" against &{Name:no-preload-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-687000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false Ext
raDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:55:16.550917    6133 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:55:16.553192    6133 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:55:16.553214    6133 cni.go:84] Creating CNI manager for ""
	I0729 16:55:16.553227    6133 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:55:16.553251    6133 start.go:340] cluster config:
	{Name:no-preload-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-687000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:55:16.556818    6133 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:16.563859    6133 out.go:177] * Starting "no-preload-687000" primary control-plane node in "no-preload-687000" cluster
	I0729 16:55:16.567880    6133 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 16:55:16.567954    6133 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/no-preload-687000/config.json ...
	I0729 16:55:16.567977    6133 cache.go:107] acquiring lock: {Name:mk9b7516d94ba00b6a4aa7e39cdfccbd9abc18a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:16.567984    6133 cache.go:107] acquiring lock: {Name:mke28d8bb287643611e1742f74c0a2702411da54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:16.568001    6133 cache.go:107] acquiring lock: {Name:mk5dfb16c2b32713424bf4f7fa30012d56659749 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:16.568043    6133 cache.go:115] /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 16:55:16.568052    6133 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 78.125µs
	I0729 16:55:16.568058    6133 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 16:55:16.568057    6133 cache.go:115] /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0729 16:55:16.568066    6133 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 87.417µs
	I0729 16:55:16.568093    6133 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0729 16:55:16.568093    6133 cache.go:107] acquiring lock: {Name:mkc13d1da61393c3be1b47df52b90bd50821a28c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:16.568067    6133 cache.go:107] acquiring lock: {Name:mk2a9e0af7c4ff7d99442d8f691344a7242706ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:16.568075    6133 cache.go:115] /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0729 16:55:16.568133    6133 cache.go:115] /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0729 16:55:16.568140    6133 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 46.75µs
	I0729 16:55:16.568143    6133 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0729 16:55:16.568147    6133 cache.go:115] /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0729 16:55:16.568076    6133 cache.go:107] acquiring lock: {Name:mk574922717d3c4ee9fcfbe98e499dded2015316 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:16.568190    6133 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 123µs
	I0729 16:55:16.568193    6133 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0729 16:55:16.568087    6133 cache.go:107] acquiring lock: {Name:mk353a6d6e3a11c433ef6f01a84187bc2bf09a7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:16.568148    6133 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 146.542µs
	I0729 16:55:16.568223    6133 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0729 16:55:16.568169    6133 cache.go:107] acquiring lock: {Name:mk615afb92907de219fc62fda402c26b3383bb72 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:16.568225    6133 cache.go:115] /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0729 16:55:16.568242    6133 cache.go:115] /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0729 16:55:16.568244    6133 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 168.5µs
	I0729 16:55:16.568246    6133 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 165.5µs
	I0729 16:55:16.568256    6133 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0729 16:55:16.568248    6133 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0729 16:55:16.568262    6133 cache.go:115] /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0729 16:55:16.568267    6133 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 136.917µs
	I0729 16:55:16.568270    6133 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0729 16:55:16.568275    6133 cache.go:87] Successfully saved all images to host disk.
	I0729 16:55:16.568383    6133 start.go:360] acquireMachinesLock for no-preload-687000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:55:17.343991    6133 start.go:364] duration metric: took 775.615292ms to acquireMachinesLock for "no-preload-687000"
	I0729 16:55:17.344197    6133 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:55:17.344227    6133 fix.go:54] fixHost starting: 
	I0729 16:55:17.344873    6133 fix.go:112] recreateIfNeeded on no-preload-687000: state=Stopped err=<nil>
	W0729 16:55:17.344919    6133 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:55:17.350593    6133 out.go:177] * Restarting existing qemu2 VM for "no-preload-687000" ...
	I0729 16:55:17.363430    6133 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:55:17.363636    6133 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:c2:2c:26:5f:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/disk.qcow2
	I0729 16:55:17.374069    6133 main.go:141] libmachine: STDOUT: 
	I0729 16:55:17.374149    6133 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:55:17.374245    6133 fix.go:56] duration metric: took 30.022708ms for fixHost
	I0729 16:55:17.374262    6133 start.go:83] releasing machines lock for "no-preload-687000", held for 30.243917ms
	W0729 16:55:17.374294    6133 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:55:17.374444    6133 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:17.374468    6133 start.go:729] Will try again in 5 seconds ...
	I0729 16:55:22.376517    6133 start.go:360] acquireMachinesLock for no-preload-687000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:55:22.376922    6133 start.go:364] duration metric: took 328.166µs to acquireMachinesLock for "no-preload-687000"
	I0729 16:55:22.377078    6133 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:55:22.377098    6133 fix.go:54] fixHost starting: 
	I0729 16:55:22.377863    6133 fix.go:112] recreateIfNeeded on no-preload-687000: state=Stopped err=<nil>
	W0729 16:55:22.377891    6133 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:55:22.389497    6133 out.go:177] * Restarting existing qemu2 VM for "no-preload-687000" ...
	I0729 16:55:22.397343    6133 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:55:22.397645    6133 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:c2:2c:26:5f:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/no-preload-687000/disk.qcow2
	I0729 16:55:22.407481    6133 main.go:141] libmachine: STDOUT: 
	I0729 16:55:22.407542    6133 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:55:22.407616    6133 fix.go:56] duration metric: took 30.520667ms for fixHost
	I0729 16:55:22.407632    6133 start.go:83] releasing machines lock for "no-preload-687000", held for 30.688167ms
	W0729 16:55:22.407832    6133 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-687000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-687000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:22.417729    6133 out.go:177] 
	W0729 16:55:22.421430    6133 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:55:22.421462    6133 out.go:239] * 
	* 
	W0729 16:55:22.424507    6133 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:55:22.431305    6133 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-687000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-687000 -n no-preload-687000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-687000 -n no-preload-687000: exit status 7 (50.138792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-687000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (6.03s)
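
Note: every exit-status-80 start failure in this group reduces to the same root cause visible in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM is never launched and everything downstream fails. A minimal triage sketch for the CI host, assuming the install paths shown in the log (the relaunch command follows the socket_vmnet README defaults and is not taken from this report):

	# Is the daemon alive, and does its socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet

	# If not, relaunch it; root is required to attach to vmnet, and
	# 192.168.105.1 is the README's default gateway (an assumption here).
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet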

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-687000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-687000 -n no-preload-687000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-687000 -n no-preload-687000: exit status 7 (34.006583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-687000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
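
Note: the context "no-preload-687000" does not exist errors in this and the following subtests are downstream of the failed SecondStart: the VM never booted, so minikube never wrote a kubeconfig entry for the profile. That can be confirmed directly with the kubeconfig path from the run's environment:

	KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig kubectl config get-contexts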

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-687000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-687000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-687000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.847917ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-687000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-687000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-687000 -n no-preload-687000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-687000 -n no-preload-687000: exit status 7 (33.11875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-687000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-687000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-687000 -n no-preload-687000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-687000 -n no-preload-687000: exit status 7 (30.093667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-687000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.09s)
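
Note: the want/got diff above reports every expected v1.31.0-beta.0 image as missing because "image list" had no running guest to query; the images were cached as tar files on the host (the cache.go lines earlier in this run) but were never loaded into a VM. A quick host-side check that the cache itself survived, using the cache path from the log:

	ls /Users/jenkins/minikube-integration/19347-923/.minikube/cache/images/arm64/registry.k8s.io/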

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-687000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-687000 --alsologtostderr -v=1: exit status 83 (42.365041ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-687000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-687000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:55:22.708323    6153 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:55:22.708470    6153 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:22.708476    6153 out.go:304] Setting ErrFile to fd 2...
	I0729 16:55:22.708479    6153 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:22.708618    6153 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:55:22.708850    6153 out.go:298] Setting JSON to false
	I0729 16:55:22.708856    6153 mustload.go:65] Loading cluster: no-preload-687000
	I0729 16:55:22.709042    6153 config.go:182] Loaded profile config "no-preload-687000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 16:55:22.713334    6153 out.go:177] * The control-plane node no-preload-687000 host is not running: state=Stopped
	I0729 16:55:22.717318    6153 out.go:177]   To start a cluster, run: "minikube start -p no-preload-687000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-687000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-687000 -n no-preload-687000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-687000 -n no-preload-687000: exit status 7 (29.482584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-687000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-687000 -n no-preload-687000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-687000 -n no-preload-687000: exit status 7 (28.9065ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-687000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
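
Note: exit status 83 here is advisory rather than a crash: as the stderr shows, "pause" loaded the profile, found the host state=Stopped, printed the "To start a cluster" hint, and exited. The harness's own status probe reproduces the state:

	out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-687000   # prints "Stopped", exit status 7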

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-321000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-321000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (11.408414208s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-321000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-321000" primary control-plane node in "default-k8s-diff-port-321000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-321000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:55:23.117291    6180 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:55:23.117418    6180 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:23.117421    6180 out.go:304] Setting ErrFile to fd 2...
	I0729 16:55:23.117424    6180 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:23.117554    6180 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:55:23.118585    6180 out.go:298] Setting JSON to false
	I0729 16:55:23.134852    6180 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3286,"bootTime":1722294037,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:55:23.134914    6180 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:55:23.139400    6180 out.go:177] * [default-k8s-diff-port-321000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:55:23.146331    6180 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:55:23.146397    6180 notify.go:220] Checking for updates...
	I0729 16:55:23.152389    6180 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:55:23.155320    6180 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:55:23.158342    6180 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:55:23.161381    6180 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:55:23.164271    6180 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:55:23.167714    6180 config.go:182] Loaded profile config "embed-certs-958000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:55:23.167790    6180 config.go:182] Loaded profile config "multinode-100000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:55:23.167835    6180 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:55:23.172310    6180 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:55:23.179308    6180 start.go:297] selected driver: qemu2
	I0729 16:55:23.179315    6180 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:55:23.179321    6180 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:55:23.181619    6180 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:55:23.184331    6180 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:55:23.185950    6180 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:55:23.185965    6180 cni.go:84] Creating CNI manager for ""
	I0729 16:55:23.185970    6180 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:55:23.185974    6180 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:55:23.186005    6180 start.go:340] cluster config:
	{Name:default-k8s-diff-port-321000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-321000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:55:23.189775    6180 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:23.197362    6180 out.go:177] * Starting "default-k8s-diff-port-321000" primary control-plane node in "default-k8s-diff-port-321000" cluster
	I0729 16:55:23.201339    6180 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:55:23.201356    6180 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:55:23.201372    6180 cache.go:56] Caching tarball of preloaded images
	I0729 16:55:23.201439    6180 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:55:23.201445    6180 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:55:23.201510    6180 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/default-k8s-diff-port-321000/config.json ...
	I0729 16:55:23.201525    6180 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/default-k8s-diff-port-321000/config.json: {Name:mk92c175264c38cd22eaf72471e177d0fa701300 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:55:23.201740    6180 start.go:360] acquireMachinesLock for default-k8s-diff-port-321000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:55:24.777276    6180 start.go:364] duration metric: took 1.575539s to acquireMachinesLock for "default-k8s-diff-port-321000"
	I0729 16:55:24.777439    6180 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-321000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-321000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:55:24.777677    6180 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:55:24.787012    6180 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:55:24.835698    6180 start.go:159] libmachine.API.Create for "default-k8s-diff-port-321000" (driver="qemu2")
	I0729 16:55:24.835748    6180 client.go:168] LocalClient.Create starting
	I0729 16:55:24.835865    6180 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:55:24.835921    6180 main.go:141] libmachine: Decoding PEM data...
	I0729 16:55:24.835941    6180 main.go:141] libmachine: Parsing certificate...
	I0729 16:55:24.836016    6180 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:55:24.836060    6180 main.go:141] libmachine: Decoding PEM data...
	I0729 16:55:24.836076    6180 main.go:141] libmachine: Parsing certificate...
	I0729 16:55:24.836667    6180 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:55:24.998910    6180 main.go:141] libmachine: Creating SSH key...
	I0729 16:55:25.119319    6180 main.go:141] libmachine: Creating Disk image...
	I0729 16:55:25.119331    6180 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:55:25.119536    6180 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2
	I0729 16:55:25.129290    6180 main.go:141] libmachine: STDOUT: 
	I0729 16:55:25.129309    6180 main.go:141] libmachine: STDERR: 
	I0729 16:55:25.129365    6180 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2 +20000M
	I0729 16:55:25.138590    6180 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:55:25.138610    6180 main.go:141] libmachine: STDERR: 
	I0729 16:55:25.138627    6180 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2
	I0729 16:55:25.138633    6180 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:55:25.138647    6180 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:55:25.138674    6180 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:ae:b6:48:e4:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2
	I0729 16:55:25.140408    6180 main.go:141] libmachine: STDOUT: 
	I0729 16:55:25.140423    6180 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:55:25.140442    6180 client.go:171] duration metric: took 304.695458ms to LocalClient.Create
	I0729 16:55:27.142441    6180 start.go:128] duration metric: took 2.364783458s to createHost
	I0729 16:55:27.142461    6180 start.go:83] releasing machines lock for "default-k8s-diff-port-321000", held for 2.365217083s
	W0729 16:55:27.142471    6180 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:27.150630    6180 out.go:177] * Deleting "default-k8s-diff-port-321000" in qemu2 ...
	W0729 16:55:27.164561    6180 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:27.164574    6180 start.go:729] Will try again in 5 seconds ...
	I0729 16:55:32.166597    6180 start.go:360] acquireMachinesLock for default-k8s-diff-port-321000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:55:32.166948    6180 start.go:364] duration metric: took 277.292µs to acquireMachinesLock for "default-k8s-diff-port-321000"
	I0729 16:55:32.167108    6180 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-321000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-321000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:55:32.167371    6180 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:55:32.172187    6180 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:55:32.221176    6180 start.go:159] libmachine.API.Create for "default-k8s-diff-port-321000" (driver="qemu2")
	I0729 16:55:32.221217    6180 client.go:168] LocalClient.Create starting
	I0729 16:55:32.221329    6180 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:55:32.221391    6180 main.go:141] libmachine: Decoding PEM data...
	I0729 16:55:32.221417    6180 main.go:141] libmachine: Parsing certificate...
	I0729 16:55:32.221475    6180 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:55:32.221520    6180 main.go:141] libmachine: Decoding PEM data...
	I0729 16:55:32.221530    6180 main.go:141] libmachine: Parsing certificate...
	I0729 16:55:32.222094    6180 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:55:32.384690    6180 main.go:141] libmachine: Creating SSH key...
	I0729 16:55:32.420877    6180 main.go:141] libmachine: Creating Disk image...
	I0729 16:55:32.420882    6180 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:55:32.421058    6180 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2
	I0729 16:55:32.430524    6180 main.go:141] libmachine: STDOUT: 
	I0729 16:55:32.430541    6180 main.go:141] libmachine: STDERR: 
	I0729 16:55:32.430584    6180 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2 +20000M
	I0729 16:55:32.438427    6180 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:55:32.438441    6180 main.go:141] libmachine: STDERR: 
	I0729 16:55:32.438453    6180 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2
	I0729 16:55:32.438463    6180 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:55:32.438473    6180 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:55:32.438496    6180 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:b8:e7:06:ae:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2
	I0729 16:55:32.440133    6180 main.go:141] libmachine: STDOUT: 
	I0729 16:55:32.440153    6180 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:55:32.440166    6180 client.go:171] duration metric: took 218.949542ms to LocalClient.Create
	I0729 16:55:34.440638    6180 start.go:128] duration metric: took 2.273277083s to createHost
	I0729 16:55:34.440700    6180 start.go:83] releasing machines lock for "default-k8s-diff-port-321000", held for 2.273786625s
	W0729 16:55:34.440972    6180 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-321000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-321000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:34.446783    6180 out.go:177] 
	W0729 16:55:34.464775    6180 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:55:34.464809    6180 out.go:239] * 
	* 
	W0729 16:55:34.467868    6180 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:55:34.479706    6180 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-321000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000: exit status 7 (61.95825ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-321000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.47s)
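
Note: this first start shows minikube's full retry path: create the qcow2 disk, launch qemu via socket_vmnet_client, hit "Connection refused", delete the profile, wait 5 seconds, retry once, then exit with GUEST_PROVISION. Because socket_vmnet_client only connects to the socket and hands it to qemu as fd 3 (-netdev socket,id=net0,fd=3), the failure can be reproduced without minikube; a sketch using the same paths (the VM is not expected to boot, only to get past the socket connect):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
		qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -display none -m 512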

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-958000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-958000 create -f testdata/busybox.yaml: exit status 1 (31.069542ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-958000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-958000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-958000 -n embed-certs-958000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-958000 -n embed-certs-958000: exit status 7 (33.621625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-958000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-958000 -n embed-certs-958000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-958000 -n embed-certs-958000: exit status 7 (33.006125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-958000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-958000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-958000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-958000 describe deploy/metrics-server -n kube-system: exit status 1 (28.057417ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-958000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-958000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-958000 -n embed-certs-958000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-958000 -n embed-certs-958000: exit status 7 (29.403708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-958000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.14s)
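
Note: "addons enable" itself succeeded here even though the cluster is down, because the --images/--registries overrides are persisted into the profile config rather than applied to a live apiserver; only the follow-up kubectl describe fails. The persisted values show up in the CustomAddonImages/CustomAddonRegistries fields of the embed-certs-958000 config dumps later in this report, and can be checked on disk (assuming minikube's pretty-printed profile JSON):

	grep -A 3 CustomAddon /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/embed-certs-958000/config.json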

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (7.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-958000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-958000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (7.35064225s)

                                                
                                                
-- stdout --
	* [embed-certs-958000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-958000" primary control-plane node in "embed-certs-958000" cluster
	* Restarting existing qemu2 VM for "embed-certs-958000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-958000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:55:27.188687    6218 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:55:27.188834    6218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:27.188837    6218 out.go:304] Setting ErrFile to fd 2...
	I0729 16:55:27.188839    6218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:27.188974    6218 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:55:27.190013    6218 out.go:298] Setting JSON to false
	I0729 16:55:27.206083    6218 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3290,"bootTime":1722294037,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:55:27.206185    6218 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:55:27.210679    6218 out.go:177] * [embed-certs-958000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:55:27.226851    6218 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:55:27.226878    6218 notify.go:220] Checking for updates...
	I0729 16:55:27.232649    6218 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:55:27.236670    6218 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:55:27.238131    6218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:55:27.241687    6218 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:55:27.244691    6218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:55:27.248027    6218 config.go:182] Loaded profile config "embed-certs-958000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:55:27.248328    6218 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:55:27.252619    6218 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:55:27.259711    6218 start.go:297] selected driver: qemu2
	I0729 16:55:27.259718    6218 start.go:901] validating driver "qemu2" against &{Name:embed-certs-958000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-958000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:55:27.259786    6218 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:55:27.262063    6218 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:55:27.262088    6218 cni.go:84] Creating CNI manager for ""
	I0729 16:55:27.262096    6218 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:55:27.262126    6218 start.go:340] cluster config:
	{Name:embed-certs-958000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-958000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:55:27.265622    6218 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:27.273712    6218 out.go:177] * Starting "embed-certs-958000" primary control-plane node in "embed-certs-958000" cluster
	I0729 16:55:27.277576    6218 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:55:27.277591    6218 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:55:27.277601    6218 cache.go:56] Caching tarball of preloaded images
	I0729 16:55:27.277660    6218 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:55:27.277665    6218 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:55:27.277708    6218 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/embed-certs-958000/config.json ...
	I0729 16:55:27.278256    6218 start.go:360] acquireMachinesLock for embed-certs-958000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:55:27.278297    6218 start.go:364] duration metric: took 34.458µs to acquireMachinesLock for "embed-certs-958000"
	I0729 16:55:27.278307    6218 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:55:27.278313    6218 fix.go:54] fixHost starting: 
	I0729 16:55:27.278433    6218 fix.go:112] recreateIfNeeded on embed-certs-958000: state=Stopped err=<nil>
	W0729 16:55:27.278442    6218 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:55:27.285622    6218 out.go:177] * Restarting existing qemu2 VM for "embed-certs-958000" ...
	I0729 16:55:27.289694    6218 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:55:27.289743    6218 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:cf:1a:cb:e4:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/disk.qcow2
	I0729 16:55:27.291922    6218 main.go:141] libmachine: STDOUT: 
	I0729 16:55:27.291945    6218 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:55:27.291972    6218 fix.go:56] duration metric: took 13.659417ms for fixHost
	I0729 16:55:27.291977    6218 start.go:83] releasing machines lock for "embed-certs-958000", held for 13.675375ms
	W0729 16:55:27.291984    6218 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:55:27.292017    6218 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:27.292022    6218 start.go:729] Will try again in 5 seconds ...
	I0729 16:55:32.294006    6218 start.go:360] acquireMachinesLock for embed-certs-958000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:55:34.440829    6218 start.go:364] duration metric: took 2.146825708s to acquireMachinesLock for "embed-certs-958000"
	I0729 16:55:34.441027    6218 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:55:34.441045    6218 fix.go:54] fixHost starting: 
	I0729 16:55:34.441767    6218 fix.go:112] recreateIfNeeded on embed-certs-958000: state=Stopped err=<nil>
	W0729 16:55:34.441792    6218 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:55:34.460751    6218 out.go:177] * Restarting existing qemu2 VM for "embed-certs-958000" ...
	I0729 16:55:34.468770    6218 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:55:34.469002    6218 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:cf:1a:cb:e4:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/embed-certs-958000/disk.qcow2
	I0729 16:55:34.478606    6218 main.go:141] libmachine: STDOUT: 
	I0729 16:55:34.478667    6218 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:55:34.478742    6218 fix.go:56] duration metric: took 37.700583ms for fixHost
	I0729 16:55:34.478760    6218 start.go:83] releasing machines lock for "embed-certs-958000", held for 37.867708ms
	W0729 16:55:34.478963    6218 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-958000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-958000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:34.490723    6218 out.go:177] 
	W0729 16:55:34.494857    6218 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:55:34.494884    6218 out.go:239] * 
	* 
	W0729 16:55:34.496824    6218 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:55:34.501829    6218 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-958000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-958000 -n embed-certs-958000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-958000 -n embed-certs-958000: exit status 7 (55.524042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-958000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (7.41s)
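Note: this failure and every other qemu2 start failure in this report reduce to the same root cause: nothing is accepting connections on the /var/run/socket_vmnet unix socket that minikube's qemu2 driver uses for guest networking. As a minimal diagnostic sketch on the build host (the socket path is taken from the log; the process name in the pgrep call is an assumption):

    # Does the unix socket minikube dials actually exist?
    ls -l /var/run/socket_vmnet
    # Is any socket_vmnet daemon process alive? (name is an assumption)
    pgrep -fl socket_vmnet

If the daemon is not running, restarting it (via launchd or Homebrew services, depending on how it was installed) should clear the "Connection refused" errors.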

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-321000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-321000 create -f testdata/busybox.yaml: exit status 1 (31.032167ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-321000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-321000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000: exit status 7 (29.521083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-321000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000: exit status 7 (33.6965ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-321000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
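Note: the "context ... does not exist" error is a downstream effect of the failed start, not a separate bug: since the profile never came up, minikube never wrote a default-k8s-diff-port-321000 entry into the kubeconfig. A quick way to confirm which contexts exist, using only the KUBECONFIG path already shown in this report:

    KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig kubectl config get-contexts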

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-958000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-958000 -n embed-certs-958000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-958000 -n embed-certs-958000: exit status 7 (32.92675ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-958000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
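Note: this check waits for the dashboard addon pod through the Kubernetes API, so it aborts at the client-config stage once the context is missing. On a healthy cluster the equivalent manual check would look roughly like this (the k8s-app label selector is an assumption about how the dashboard addon labels its pods):

    kubectl --context embed-certs-958000 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard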

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-958000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-958000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-958000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.120916ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-958000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-958000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-958000 -n embed-certs-958000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-958000 -n embed-certs-958000: exit status 7 (30.466042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-958000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
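Note: the test expects the dashboard-metrics-scraper Deployment to run the custom image registry.k8s.io/echoserver:1.4 passed at start. A jsonpath query pulls just the image field, which is easier to assert on than describe output; this is a sketch, not the test's own command:

    kubectl --context embed-certs-958000 -n kubernetes-dashboard \
      get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'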

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-321000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-321000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-321000 describe deploy/metrics-server -n kube-system: exit status 1 (29.155458ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-321000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-321000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000: exit status 7 (35.751583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-321000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)
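Note: here the addon is enabled with both a custom image and a custom registry, so the expected container image is the concatenation fake.domain/registry.k8s.io/echoserver:1.4. The same jsonpath pattern as above would verify it once a cluster is actually running:

    kubectl --context default-k8s-diff-port-321000 -n kube-system \
      get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[*].image}'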

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-958000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-958000 -n embed-certs-958000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-958000 -n embed-certs-958000: exit status 7 (29.460125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-958000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
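Note: VerifyKubernetesImages diffs the profile's image list against the expected set for v1.30.3; the empty "got" side simply reflects that the VM never booted, so no images were ever loaded. On a running profile the same subcommand can emit a human-readable listing instead of JSON (--format=table is an assumption about the supported format values; --format=json is the form the test itself uses):

    out/minikube-darwin-arm64 -p embed-certs-958000 image list --format=table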

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-958000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-958000 --alsologtostderr -v=1: exit status 83 (45.019667ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-958000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-958000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:55:34.770707    6251 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:55:34.770871    6251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:34.770874    6251 out.go:304] Setting ErrFile to fd 2...
	I0729 16:55:34.770877    6251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:34.771012    6251 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:55:34.771242    6251 out.go:298] Setting JSON to false
	I0729 16:55:34.771248    6251 mustload.go:65] Loading cluster: embed-certs-958000
	I0729 16:55:34.771454    6251 config.go:182] Loaded profile config "embed-certs-958000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:55:34.775471    6251 out.go:177] * The control-plane node embed-certs-958000 host is not running: state=Stopped
	I0729 16:55:34.781265    6251 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-958000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-958000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-958000 -n embed-certs-958000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-958000 -n embed-certs-958000: exit status 7 (37.569834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-958000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-958000 -n embed-certs-958000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-958000 -n embed-certs-958000: exit status 7 (27.005416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-958000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)
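Note: pause exits 83 here because the profile exists but its host is stopped, as the stdout explains. In an automated flow, a cheap guard is to gate the pause on the same status command the harness uses, which exits non-zero (7, above) for a stopped host:

    out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-958000 \
      && out/minikube-darwin-arm64 pause -p embed-certs-958000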

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (9.94s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-512000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-512000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.866860542s)

                                                
                                                
-- stdout --
	* [newest-cni-512000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-512000" primary control-plane node in "newest-cni-512000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-512000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:55:35.093247    6274 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:55:35.093440    6274 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:35.093443    6274 out.go:304] Setting ErrFile to fd 2...
	I0729 16:55:35.093445    6274 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:35.093586    6274 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:55:35.094720    6274 out.go:298] Setting JSON to false
	I0729 16:55:35.110724    6274 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3298,"bootTime":1722294037,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:55:35.110843    6274 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:55:35.116483    6274 out.go:177] * [newest-cni-512000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:55:35.124499    6274 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:55:35.124550    6274 notify.go:220] Checking for updates...
	I0729 16:55:35.130472    6274 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:55:35.133379    6274 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:55:35.136491    6274 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:55:35.139499    6274 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:55:35.142439    6274 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:55:35.145762    6274 config.go:182] Loaded profile config "default-k8s-diff-port-321000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:55:35.145829    6274 config.go:182] Loaded profile config "multinode-100000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:55:35.145875    6274 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:55:35.150472    6274 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:55:35.157484    6274 start.go:297] selected driver: qemu2
	I0729 16:55:35.157493    6274 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:55:35.157502    6274 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:55:35.159753    6274 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0729 16:55:35.159774    6274 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0729 16:55:35.168412    6274 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:55:35.171569    6274 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 16:55:35.171587    6274 cni.go:84] Creating CNI manager for ""
	I0729 16:55:35.171601    6274 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:55:35.171609    6274 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:55:35.171639    6274 start.go:340] cluster config:
	{Name:newest-cni-512000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-512000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:55:35.175381    6274 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:35.181450    6274 out.go:177] * Starting "newest-cni-512000" primary control-plane node in "newest-cni-512000" cluster
	I0729 16:55:35.185478    6274 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 16:55:35.185494    6274 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 16:55:35.185503    6274 cache.go:56] Caching tarball of preloaded images
	I0729 16:55:35.185572    6274 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:55:35.185578    6274 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 16:55:35.185651    6274 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/newest-cni-512000/config.json ...
	I0729 16:55:35.185663    6274 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/newest-cni-512000/config.json: {Name:mkec81c90333521f27724700d42ae28d97a6789d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:55:35.185881    6274 start.go:360] acquireMachinesLock for newest-cni-512000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:55:35.185916    6274 start.go:364] duration metric: took 29.125µs to acquireMachinesLock for "newest-cni-512000"
	I0729 16:55:35.185929    6274 start.go:93] Provisioning new machine with config: &{Name:newest-cni-512000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-512000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Us
ers:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:55:35.185959    6274 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:55:35.194390    6274 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:55:35.212658    6274 start.go:159] libmachine.API.Create for "newest-cni-512000" (driver="qemu2")
	I0729 16:55:35.212681    6274 client.go:168] LocalClient.Create starting
	I0729 16:55:35.212745    6274 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:55:35.212775    6274 main.go:141] libmachine: Decoding PEM data...
	I0729 16:55:35.212788    6274 main.go:141] libmachine: Parsing certificate...
	I0729 16:55:35.212824    6274 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:55:35.212849    6274 main.go:141] libmachine: Decoding PEM data...
	I0729 16:55:35.212855    6274 main.go:141] libmachine: Parsing certificate...
	I0729 16:55:35.213202    6274 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:55:35.367474    6274 main.go:141] libmachine: Creating SSH key...
	I0729 16:55:35.448286    6274 main.go:141] libmachine: Creating Disk image...
	I0729 16:55:35.448292    6274 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:55:35.448463    6274 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/disk.qcow2
	I0729 16:55:35.457768    6274 main.go:141] libmachine: STDOUT: 
	I0729 16:55:35.457784    6274 main.go:141] libmachine: STDERR: 
	I0729 16:55:35.457832    6274 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/disk.qcow2 +20000M
	I0729 16:55:35.465588    6274 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:55:35.465603    6274 main.go:141] libmachine: STDERR: 
	I0729 16:55:35.465617    6274 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/disk.qcow2
	I0729 16:55:35.465623    6274 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:55:35.465635    6274 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:55:35.465664    6274 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:b9:ac:4b:20:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/disk.qcow2
	I0729 16:55:35.467304    6274 main.go:141] libmachine: STDOUT: 
	I0729 16:55:35.467321    6274 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:55:35.467340    6274 client.go:171] duration metric: took 254.660417ms to LocalClient.Create
	I0729 16:55:37.469525    6274 start.go:128] duration metric: took 2.283599584s to createHost
	I0729 16:55:37.469597    6274 start.go:83] releasing machines lock for "newest-cni-512000", held for 2.283725875s
	W0729 16:55:37.469648    6274 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:37.485819    6274 out.go:177] * Deleting "newest-cni-512000" in qemu2 ...
	W0729 16:55:37.518202    6274 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:37.518234    6274 start.go:729] Will try again in 5 seconds ...
	I0729 16:55:42.520337    6274 start.go:360] acquireMachinesLock for newest-cni-512000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:55:42.530203    6274 start.go:364] duration metric: took 9.790625ms to acquireMachinesLock for "newest-cni-512000"
	I0729 16:55:42.530247    6274 start.go:93] Provisioning new machine with config: &{Name:newest-cni-512000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-512000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Us
ers:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:55:42.530495    6274 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:55:42.542651    6274 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:55:42.587316    6274 start.go:159] libmachine.API.Create for "newest-cni-512000" (driver="qemu2")
	I0729 16:55:42.587367    6274 client.go:168] LocalClient.Create starting
	I0729 16:55:42.587463    6274 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/ca.pem
	I0729 16:55:42.587522    6274 main.go:141] libmachine: Decoding PEM data...
	I0729 16:55:42.587546    6274 main.go:141] libmachine: Parsing certificate...
	I0729 16:55:42.587608    6274 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19347-923/.minikube/certs/cert.pem
	I0729 16:55:42.587651    6274 main.go:141] libmachine: Decoding PEM data...
	I0729 16:55:42.587666    6274 main.go:141] libmachine: Parsing certificate...
	I0729 16:55:42.588166    6274 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19347-923/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:55:42.752519    6274 main.go:141] libmachine: Creating SSH key...
	I0729 16:55:42.869764    6274 main.go:141] libmachine: Creating Disk image...
	I0729 16:55:42.869774    6274 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:55:42.869943    6274 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/disk.qcow2.raw /Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/disk.qcow2
	I0729 16:55:42.879839    6274 main.go:141] libmachine: STDOUT: 
	I0729 16:55:42.879863    6274 main.go:141] libmachine: STDERR: 
	I0729 16:55:42.879939    6274 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/disk.qcow2 +20000M
	I0729 16:55:42.889222    6274 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:55:42.889244    6274 main.go:141] libmachine: STDERR: 
	I0729 16:55:42.889257    6274 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/disk.qcow2
	I0729 16:55:42.889261    6274 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:55:42.889280    6274 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:55:42.889315    6274 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:2b:d9:2f:6a:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/disk.qcow2
	I0729 16:55:42.891062    6274 main.go:141] libmachine: STDOUT: 
	I0729 16:55:42.891078    6274 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:55:42.891091    6274 client.go:171] duration metric: took 303.727ms to LocalClient.Create
	I0729 16:55:44.893326    6274 start.go:128] duration metric: took 2.362769375s to createHost
	I0729 16:55:44.893388    6274 start.go:83] releasing machines lock for "newest-cni-512000", held for 2.363216958s
	W0729 16:55:44.893699    6274 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-512000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-512000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:44.903423    6274 out.go:177] 
	W0729 16:55:44.907586    6274 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:55:44.907634    6274 out.go:239] * 
	* 
	W0729 16:55:44.910075    6274 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:55:44.923396    6274 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-512000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-512000 -n newest-cni-512000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-512000 -n newest-cni-512000: exit status 7 (64.100458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-512000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.94s)
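Note: the create-from-scratch path fails the same way as the restarts: libmachine wraps the qemu-system-aarch64 invocation in socket_vmnet_client, and the client cannot connect to the daemon before qemu ever runs. Reusing the invocation pattern shown in the log, a minimal reproduction substitutes a trivial command for qemu (/usr/bin/true here is an assumption, any command works):

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true

If this alone prints 'Failed to connect to "/var/run/socket_vmnet": Connection refused', the fault is the daemon, not minikube or qemu.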

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.81s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-321000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-321000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.761345375s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-321000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-321000" primary control-plane node in "default-k8s-diff-port-321000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-321000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-321000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:55:36.834446    6305 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:55:36.834572    6305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:36.834576    6305 out.go:304] Setting ErrFile to fd 2...
	I0729 16:55:36.834578    6305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:36.834699    6305 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:55:36.835749    6305 out.go:298] Setting JSON to false
	I0729 16:55:36.851356    6305 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3299,"bootTime":1722294037,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:55:36.851422    6305 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:55:36.856098    6305 out.go:177] * [default-k8s-diff-port-321000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:55:36.864080    6305 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:55:36.864135    6305 notify.go:220] Checking for updates...
	I0729 16:55:36.872038    6305 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:55:36.876056    6305 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:55:36.878999    6305 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:55:36.882087    6305 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:55:36.885073    6305 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:55:36.888305    6305 config.go:182] Loaded profile config "default-k8s-diff-port-321000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:55:36.888600    6305 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:55:36.893019    6305 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:55:36.900037    6305 start.go:297] selected driver: qemu2
	I0729 16:55:36.900046    6305 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-321000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-321000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:55:36.900097    6305 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:55:36.902435    6305 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:55:36.902470    6305 cni.go:84] Creating CNI manager for ""
	I0729 16:55:36.902481    6305 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:55:36.902504    6305 start.go:340] cluster config:
	{Name:default-k8s-diff-port-321000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-321000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:55:36.906293    6305 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:36.914029    6305 out.go:177] * Starting "default-k8s-diff-port-321000" primary control-plane node in "default-k8s-diff-port-321000" cluster
	I0729 16:55:36.918033    6305 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:55:36.918052    6305 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:55:36.918059    6305 cache.go:56] Caching tarball of preloaded images
	I0729 16:55:36.918111    6305 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:55:36.918116    6305 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:55:36.918173    6305 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/default-k8s-diff-port-321000/config.json ...
	I0729 16:55:36.918663    6305 start.go:360] acquireMachinesLock for default-k8s-diff-port-321000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:55:37.469742    6305 start.go:364] duration metric: took 551.071625ms to acquireMachinesLock for "default-k8s-diff-port-321000"
	I0729 16:55:37.469927    6305 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:55:37.469970    6305 fix.go:54] fixHost starting: 
	I0729 16:55:37.470636    6305 fix.go:112] recreateIfNeeded on default-k8s-diff-port-321000: state=Stopped err=<nil>
	W0729 16:55:37.470689    6305 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:55:37.476811    6305 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-321000" ...
	I0729 16:55:37.490771    6305 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:55:37.490969    6305 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:b8:e7:06:ae:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2
	I0729 16:55:37.501478    6305 main.go:141] libmachine: STDOUT: 
	I0729 16:55:37.501546    6305 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:55:37.501673    6305 fix.go:56] duration metric: took 31.706542ms for fixHost
	I0729 16:55:37.501695    6305 start.go:83] releasing machines lock for "default-k8s-diff-port-321000", held for 31.918959ms
	W0729 16:55:37.501725    6305 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:55:37.501876    6305 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:37.501893    6305 start.go:729] Will try again in 5 seconds ...
	I0729 16:55:42.504057    6305 start.go:360] acquireMachinesLock for default-k8s-diff-port-321000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:55:42.504494    6305 start.go:364] duration metric: took 331.625µs to acquireMachinesLock for "default-k8s-diff-port-321000"
	I0729 16:55:42.504633    6305 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:55:42.504652    6305 fix.go:54] fixHost starting: 
	I0729 16:55:42.505378    6305 fix.go:112] recreateIfNeeded on default-k8s-diff-port-321000: state=Stopped err=<nil>
	W0729 16:55:42.505407    6305 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:55:42.515781    6305 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-321000" ...
	I0729 16:55:42.519816    6305 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:55:42.520030    6305 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:b8:e7:06:ae:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2
	I0729 16:55:42.529987    6305 main.go:141] libmachine: STDOUT: 
	I0729 16:55:42.530041    6305 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:55:42.530120    6305 fix.go:56] duration metric: took 25.471083ms for fixHost
	I0729 16:55:42.530142    6305 start.go:83] releasing machines lock for "default-k8s-diff-port-321000", held for 25.627083ms
	W0729 16:55:42.530331    6305 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-321000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-321000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:42.542646    6305 out.go:177] 
	W0729 16:55:42.546868    6305 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:55:42.546896    6305 out.go:239] * 
	* 
	W0729 16:55:42.548899    6305 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:55:42.558802    6305 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-321000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000: exit status 7 (49.186417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-321000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.81s)
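
Every start failure in this group traces to the same stderr line: socket_vmnet_client could not reach the unix socket at /var/run/socket_vmnet, so the socket_vmnet daemon was evidently not running (or not listening at that path) on the CI host when the qemu2 VM restart was attempted. The sketch below is a minimal stand-alone probe of that socket — a hypothetical helper, not minikube code — that reproduces the same "connection refused" symptom when the daemon is down.

package main

// socketprobe: dial the socket_vmnet unix socket the same way
// socket_vmnet_client must before it can hand qemu a network fd.
// A stopped daemon surfaces here as "connect: connection refused"
// or "no such file or directory", matching the errors in this report.

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections on", sock)
}

If the probe fails, restarting the daemon before rerunning the suite is the obvious first step (for a Homebrew install, `sudo brew services restart socket_vmnet`; for a source install under /opt/socket_vmnet, whatever launchd unit it was installed with).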

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-321000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000: exit status 7 (33.034458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-321000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-321000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-321000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-321000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.565334ms)

** stderr ** 
	error: context "default-k8s-diff-port-321000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-321000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000: exit status 7 (32.75175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-321000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-321000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000: exit status 7 (29.491209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-321000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)
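
The "(-want +got)" block above is go-cmp diff notation: every entry prefixed with "-" is present in the wanted image list but missing from what `image list --format=json` returned — here all of them, since the VM never booted and no images were loaded. A minimal sketch of how such a diff is produced, assuming the github.com/google/go-cmp module (this is not the test's actual code):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.30.3",
		"registry.k8s.io/pause:3.9",
	}
	got := []string{} // a profile whose VM never started lists no images

	// cmp.Diff marks values only in want with "-" and values only in got
	// with "+" — the exact shape of the missing-image block above.
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("images missing (-want +got):\n%s", diff)
	}
}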

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-321000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-321000 --alsologtostderr -v=1: exit status 83 (42.981334ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-321000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-321000"

-- /stdout --
** stderr ** 
	I0729 16:55:42.822891    6325 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:55:42.823049    6325 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:42.823057    6325 out.go:304] Setting ErrFile to fd 2...
	I0729 16:55:42.823059    6325 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:42.823220    6325 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:55:42.823438    6325 out.go:298] Setting JSON to false
	I0729 16:55:42.823444    6325 mustload.go:65] Loading cluster: default-k8s-diff-port-321000
	I0729 16:55:42.823636    6325 config.go:182] Loaded profile config "default-k8s-diff-port-321000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:55:42.827739    6325 out.go:177] * The control-plane node default-k8s-diff-port-321000 host is not running: state=Stopped
	I0729 16:55:42.831779    6325 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-321000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-321000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000: exit status 7 (28.953542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-321000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000: exit status 7 (28.959542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-321000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
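
The post-mortem probe repeated throughout these blocks, `status --format={{.Host}}`, renders a Go text/template against the profile's status object, which is why a stopped profile prints the bare word "Stopped" (and exits 7 rather than 0). The sketch below shows the same mechanism with a stand-in Status type; only the Host field name is taken from the flag above, the rest is hypothetical.

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for the structure a --format template is applied
// to; it is not minikube's actual type.
type Status struct {
	Host, Kubelet, APIServer string
}

func main() {
	st := Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"}
	// "{{.Host}}" selects a single field, so the command's stdout is just
	// the word "Stopped", as captured in the blocks above.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		os.Exit(1)
	}
}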

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-512000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-512000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.184346583s)

-- stdout --
	* [newest-cni-512000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-512000" primary control-plane node in "newest-cni-512000" cluster
	* Restarting existing qemu2 VM for "newest-cni-512000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-512000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:55:47.241838    6370 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:55:47.241958    6370 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:47.241962    6370 out.go:304] Setting ErrFile to fd 2...
	I0729 16:55:47.241964    6370 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:47.242091    6370 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:55:47.243093    6370 out.go:298] Setting JSON to false
	I0729 16:55:47.259338    6370 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3310,"bootTime":1722294037,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:55:47.259406    6370 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:55:47.264261    6370 out.go:177] * [newest-cni-512000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:55:47.271232    6370 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:55:47.271286    6370 notify.go:220] Checking for updates...
	I0729 16:55:47.278235    6370 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:55:47.281283    6370 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:55:47.284250    6370 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:55:47.287202    6370 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:55:47.290251    6370 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:55:47.293437    6370 config.go:182] Loaded profile config "newest-cni-512000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 16:55:47.293685    6370 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:55:47.298199    6370 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:55:47.304135    6370 start.go:297] selected driver: qemu2
	I0729 16:55:47.304144    6370 start.go:901] validating driver "qemu2" against &{Name:newest-cni-512000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-512000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> Expos
edPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:55:47.304214    6370 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:55:47.306464    6370 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 16:55:47.306490    6370 cni.go:84] Creating CNI manager for ""
	I0729 16:55:47.306497    6370 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:55:47.306516    6370 start.go:340] cluster config:
	{Name:newest-cni-512000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-512000 Namespace:default A
PIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false
ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:55:47.309991    6370 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:47.317181    6370 out.go:177] * Starting "newest-cni-512000" primary control-plane node in "newest-cni-512000" cluster
	I0729 16:55:47.321230    6370 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 16:55:47.321245    6370 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 16:55:47.321256    6370 cache.go:56] Caching tarball of preloaded images
	I0729 16:55:47.321317    6370 preload.go:172] Found /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:55:47.321323    6370 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 16:55:47.321393    6370 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/newest-cni-512000/config.json ...
	I0729 16:55:47.321840    6370 start.go:360] acquireMachinesLock for newest-cni-512000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:55:47.321872    6370 start.go:364] duration metric: took 26.417µs to acquireMachinesLock for "newest-cni-512000"
	I0729 16:55:47.321882    6370 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:55:47.321886    6370 fix.go:54] fixHost starting: 
	I0729 16:55:47.322003    6370 fix.go:112] recreateIfNeeded on newest-cni-512000: state=Stopped err=<nil>
	W0729 16:55:47.322013    6370 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:55:47.325316    6370 out.go:177] * Restarting existing qemu2 VM for "newest-cni-512000" ...
	I0729 16:55:47.333240    6370 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:55:47.333272    6370 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:2b:d9:2f:6a:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/disk.qcow2
	I0729 16:55:47.335239    6370 main.go:141] libmachine: STDOUT: 
	I0729 16:55:47.335257    6370 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:55:47.335284    6370 fix.go:56] duration metric: took 13.397584ms for fixHost
	I0729 16:55:47.335288    6370 start.go:83] releasing machines lock for "newest-cni-512000", held for 13.412916ms
	W0729 16:55:47.335295    6370 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:55:47.335325    6370 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:47.335330    6370 start.go:729] Will try again in 5 seconds ...
	I0729 16:55:52.337269    6370 start.go:360] acquireMachinesLock for newest-cni-512000: {Name:mk470499a5bb946bcf40f861c45f96538796341b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:55:52.337693    6370 start.go:364] duration metric: took 295.208µs to acquireMachinesLock for "newest-cni-512000"
	I0729 16:55:52.337822    6370 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:55:52.337842    6370 fix.go:54] fixHost starting: 
	I0729 16:55:52.338554    6370 fix.go:112] recreateIfNeeded on newest-cni-512000: state=Stopped err=<nil>
	W0729 16:55:52.338581    6370 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:55:52.347246    6370 out.go:177] * Restarting existing qemu2 VM for "newest-cni-512000" ...
	I0729 16:55:52.351279    6370 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:55:52.351533    6370 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:2b:d9:2f:6a:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19347-923/.minikube/machines/newest-cni-512000/disk.qcow2
	I0729 16:55:52.361168    6370 main.go:141] libmachine: STDOUT: 
	I0729 16:55:52.361252    6370 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:55:52.361362    6370 fix.go:56] duration metric: took 23.52ms for fixHost
	I0729 16:55:52.361388    6370 start.go:83] releasing machines lock for "newest-cni-512000", held for 23.669709ms
	W0729 16:55:52.361640    6370 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-512000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-512000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:52.370228    6370 out.go:177] 
	W0729 16:55:52.374293    6370 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:55:52.374324    6370 out.go:239] * 
	* 
	W0729 16:55:52.376759    6370 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:55:52.385170    6370 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-512000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-512000 -n newest-cni-512000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-512000 -n newest-cni-512000: exit status 7 (67.083833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-512000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-512000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-512000 -n newest-cni-512000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-512000 -n newest-cni-512000: exit status 7 (29.412667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-512000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-512000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-512000 --alsologtostderr -v=1: exit status 83 (41.072ms)

-- stdout --
	* The control-plane node newest-cni-512000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-512000"

-- /stdout --
** stderr ** 
	I0729 16:55:52.565846    6387 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:55:52.566012    6387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:52.566016    6387 out.go:304] Setting ErrFile to fd 2...
	I0729 16:55:52.566018    6387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:52.566146    6387 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:55:52.566375    6387 out.go:298] Setting JSON to false
	I0729 16:55:52.566381    6387 mustload.go:65] Loading cluster: newest-cni-512000
	I0729 16:55:52.566577    6387 config.go:182] Loaded profile config "newest-cni-512000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 16:55:52.570318    6387 out.go:177] * The control-plane node newest-cni-512000 host is not running: state=Stopped
	I0729 16:55:52.574137    6387 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-512000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-512000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-512000 -n newest-cni-512000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-512000 -n newest-cni-512000: exit status 7 (29.827375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-512000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-512000 -n newest-cni-512000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-512000 -n newest-cni-512000: exit status 7 (29.188375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-512000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
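
Three distinct exit codes recur across these failures: 80 when provisioning the guest fails outright (the GUEST_PROVISION exit above), 83 when a command such as pause is refused because the host is not running, and 7 from status when the host is simply stopped. That mapping is read off the log lines in this report, not from minikube's source; a harness that wanted to branch on it might look like the following sketch.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Hypothetical wrapper: run a minikube subcommand and interpret the
	// exit codes observed in this report.
	cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "newest-cni-512000")
	err := cmd.Run()
	if err == nil {
		fmt.Println("command succeeded")
		return
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		switch ee.ExitCode() {
		case 80:
			fmt.Println("guest provisioning failed; check the driver and socket_vmnet")
		case 83:
			fmt.Println("host is not running; start the profile first")
		case 7:
			fmt.Println("status: host stopped (may be expected after a Stop test)")
		default:
			fmt.Printf("unexpected exit code %d\n", ee.ExitCode())
		}
		return
	}
	fmt.Println("failed to run command:", err)
}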


Test pass (162/282)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 10.85
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.1
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-beta.0/json-events 14.05
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.1
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.34
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 206.94
38 TestAddons/serial/Volcano 37.94
40 TestAddons/serial/GCPAuth/Namespaces 0.08
42 TestAddons/parallel/Registry 13.16
43 TestAddons/parallel/Ingress 18.16
44 TestAddons/parallel/InspektorGadget 10.21
45 TestAddons/parallel/MetricsServer 5.25
48 TestAddons/parallel/CSI 52.8
49 TestAddons/parallel/Headlamp 12.41
50 TestAddons/parallel/CloudSpanner 5.17
51 TestAddons/parallel/LocalPath 40.78
52 TestAddons/parallel/NvidiaDevicePlugin 5.14
53 TestAddons/parallel/Yakd 10.2
54 TestAddons/StoppedEnableDisable 12.39
62 TestHyperKitDriverInstallOrUpdate 10.65
65 TestErrorSpam/setup 33.22
66 TestErrorSpam/start 0.33
67 TestErrorSpam/status 0.24
68 TestErrorSpam/pause 0.67
69 TestErrorSpam/unpause 0.63
70 TestErrorSpam/stop 64.29
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 50.99
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 37.58
77 TestFunctional/serial/KubeContext 0.03
78 TestFunctional/serial/KubectlGetPods 0.05
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.5
82 TestFunctional/serial/CacheCmd/cache/add_local 1.12
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
86 TestFunctional/serial/CacheCmd/cache/cache_reload 0.62
87 TestFunctional/serial/CacheCmd/cache/delete 0.07
88 TestFunctional/serial/MinikubeKubectlCmd 0.66
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.93
90 TestFunctional/serial/ExtraConfig 37
91 TestFunctional/serial/ComponentHealth 0.04
92 TestFunctional/serial/LogsCmd 0.66
93 TestFunctional/serial/LogsFileCmd 0.7
94 TestFunctional/serial/InvalidService 3.9
96 TestFunctional/parallel/ConfigCmd 0.21
97 TestFunctional/parallel/DashboardCmd 12.23
98 TestFunctional/parallel/DryRun 0.33
99 TestFunctional/parallel/InternationalLanguage 0.11
100 TestFunctional/parallel/StatusCmd 0.24
105 TestFunctional/parallel/AddonsCmd 0.09
106 TestFunctional/parallel/PersistentVolumeClaim 26.96
108 TestFunctional/parallel/SSHCmd 0.16
109 TestFunctional/parallel/CpCmd 0.38
111 TestFunctional/parallel/FileSync 0.06
112 TestFunctional/parallel/CertSync 0.37
116 TestFunctional/parallel/NodeLabels 0.04
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.06
120 TestFunctional/parallel/License 0.23
121 TestFunctional/parallel/Version/short 0.04
122 TestFunctional/parallel/Version/components 0.15
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
127 TestFunctional/parallel/ImageCommands/ImageBuild 1.75
128 TestFunctional/parallel/ImageCommands/Setup 1.75
129 TestFunctional/parallel/DockerEnv/bash 0.27
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.21
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.13
138 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.47
139 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.35
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.15
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.15
142 TestFunctional/parallel/ImageCommands/ImageRemove 0.14
143 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.22
144 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.17
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
146 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
148 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
149 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
150 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
151 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
152 TestFunctional/parallel/ServiceCmd/List 0.27
153 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
154 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
155 TestFunctional/parallel/ServiceCmd/Format 0.1
156 TestFunctional/parallel/ServiceCmd/URL 0.1
157 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
158 TestFunctional/parallel/ProfileCmd/profile_list 0.11
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
160 TestFunctional/parallel/MountCmd/any-port 5.07
161 TestFunctional/parallel/MountCmd/specific-port 0.86
162 TestFunctional/parallel/MountCmd/VerifyCleanup 1.47
163 TestFunctional/delete_echo-server_images 0.03
164 TestFunctional/delete_my-image_image 0.01
165 TestFunctional/delete_minikube_cached_images 0.01
169 TestMultiControlPlane/serial/StartCluster 192.31
170 TestMultiControlPlane/serial/DeployApp 4.52
171 TestMultiControlPlane/serial/PingHostFromPods 0.76
172 TestMultiControlPlane/serial/AddWorkerNode 51.81
173 TestMultiControlPlane/serial/NodeLabels 0.13
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.26
175 TestMultiControlPlane/serial/CopyFile 4.35
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 80.22
187 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.04
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 3.47
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.2
221 TestMainNoArgs 0.03
268 TestStoppedBinaryUpgrade/Setup 1.27
280 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
285 TestNoKubernetes/serial/ProfileList 31.52
286 TestNoKubernetes/serial/Stop 3.76
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
298 TestStoppedBinaryUpgrade/MinikubeLogs 0.8
303 TestStartStop/group/old-k8s-version/serial/Stop 3.54
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
314 TestStartStop/group/no-preload/serial/Stop 3.41
317 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
327 TestStartStop/group/embed-certs/serial/Stop 1.93
328 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
336 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.9
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
345 TestStartStop/group/newest-cni/serial/DeployApp 0
346 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
347 TestStartStop/group/newest-cni/serial/Stop 2.03
348 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-418000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-418000: exit status 85 (96.83175ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-418000 | jenkins | v1.33.1 | 29 Jul 24 16:02 PDT |          |
	|         | -p download-only-418000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 16:02:49
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 16:02:49.994403    1392 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:02:49.994559    1392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:02:49.994562    1392 out.go:304] Setting ErrFile to fd 2...
	I0729 16:02:49.994564    1392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:02:49.994711    1392 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	W0729 16:02:49.994795    1392 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19347-923/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19347-923/.minikube/config/config.json: no such file or directory
	I0729 16:02:49.996103    1392 out.go:298] Setting JSON to true
	I0729 16:02:50.013134    1392 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":133,"bootTime":1722294037,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:02:50.013198    1392 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:02:50.019081    1392 out.go:97] [download-only-418000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:02:50.019280    1392 notify.go:220] Checking for updates...
	W0729 16:02:50.019304    1392 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 16:02:50.021980    1392 out.go:169] MINIKUBE_LOCATION=19347
	I0729 16:02:50.024967    1392 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:02:50.030042    1392 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:02:50.033031    1392 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:02:50.035958    1392 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	W0729 16:02:50.042061    1392 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 16:02:50.042291    1392 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:02:50.046991    1392 out.go:97] Using the qemu2 driver based on user configuration
	I0729 16:02:50.047009    1392 start.go:297] selected driver: qemu2
	I0729 16:02:50.047013    1392 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:02:50.047076    1392 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:02:50.051037    1392 out.go:169] Automatically selected the socket_vmnet network
	I0729 16:02:50.056643    1392 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 16:02:50.056721    1392 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 16:02:50.056779    1392 cni.go:84] Creating CNI manager for ""
	I0729 16:02:50.056796    1392 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 16:02:50.056846    1392 start.go:340] cluster config:
	{Name:download-only-418000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:02:50.061996    1392 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:02:50.065979    1392 out.go:97] Downloading VM boot image ...
	I0729 16:02:50.065999    1392 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso
	I0729 16:02:57.080741    1392 out.go:97] Starting "download-only-418000" primary control-plane node in "download-only-418000" cluster
	I0729 16:02:57.080760    1392 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 16:02:57.141059    1392 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 16:02:57.141067    1392 cache.go:56] Caching tarball of preloaded images
	I0729 16:02:57.141214    1392 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 16:02:57.144741    1392 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 16:02:57.144747    1392 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 16:02:57.221290    1392 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 16:03:06.760899    1392 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 16:03:06.761057    1392 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 16:03:07.456998    1392 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 16:03:07.457211    1392 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/download-only-418000/config.json ...
	I0729 16:03:07.457231    1392 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/download-only-418000/config.json: {Name:mk72b5783e5430eb4f6ffdc2d7a3ce3666a8e0b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:03:07.457454    1392 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 16:03:07.457652    1392 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0729 16:03:07.843287    1392 out.go:169] 
	W0729 16:03:07.849246    1392 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19347-923/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1069a9a60 0x1069a9a60 0x1069a9a60 0x1069a9a60 0x1069a9a60 0x1069a9a60 0x1069a9a60] Decompressors:map[bz2:0x14000167f90 gz:0x14000167f98 tar:0x14000167f40 tar.bz2:0x14000167f50 tar.gz:0x14000167f60 tar.xz:0x14000167f70 tar.zst:0x14000167f80 tbz2:0x14000167f50 tgz:0x14000167f60 txz:0x14000167f70 tzst:0x14000167f80 xz:0x14000167fa0 zip:0x14000167fb0 zst:0x14000167fa8] Getters:map[file:0x14000a13760 http:0x140007fc190 https:0x140007fc1e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0729 16:03:07.849270    1392 out_reason.go:110] 
	W0729 16:03:07.855020    1392 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:03:07.859184    1392 out.go:169] 
	
	
	* The control-plane node download-only-418000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-418000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-418000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.3/json-events (10.85s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-818000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-818000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (10.853662083s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (10.85s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-818000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-818000: exit status 85 (74.867292ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-418000 | jenkins | v1.33.1 | 29 Jul 24 16:02 PDT |                     |
	|         | -p download-only-418000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:03 PDT | 29 Jul 24 16:03 PDT |
	| delete  | -p download-only-418000        | download-only-418000 | jenkins | v1.33.1 | 29 Jul 24 16:03 PDT | 29 Jul 24 16:03 PDT |
	| start   | -o=json --download-only        | download-only-818000 | jenkins | v1.33.1 | 29 Jul 24 16:03 PDT |                     |
	|         | -p download-only-818000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 16:03:08
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 16:03:08.263595    1451 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:03:08.263718    1451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:03:08.263721    1451 out.go:304] Setting ErrFile to fd 2...
	I0729 16:03:08.263724    1451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:03:08.263856    1451 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:03:08.264868    1451 out.go:298] Setting JSON to true
	I0729 16:03:08.280781    1451 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":151,"bootTime":1722294037,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:03:08.280853    1451 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:03:08.284224    1451 out.go:97] [download-only-818000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:03:08.284301    1451 notify.go:220] Checking for updates...
	I0729 16:03:08.288062    1451 out.go:169] MINIKUBE_LOCATION=19347
	I0729 16:03:08.291127    1451 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:03:08.295186    1451 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:03:08.298076    1451 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:03:08.301127    1451 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	W0729 16:03:08.307043    1451 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 16:03:08.307170    1451 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:03:08.310057    1451 out.go:97] Using the qemu2 driver based on user configuration
	I0729 16:03:08.310066    1451 start.go:297] selected driver: qemu2
	I0729 16:03:08.310071    1451 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:03:08.310118    1451 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:03:08.313130    1451 out.go:169] Automatically selected the socket_vmnet network
	I0729 16:03:08.318178    1451 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 16:03:08.318275    1451 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 16:03:08.318324    1451 cni.go:84] Creating CNI manager for ""
	I0729 16:03:08.318331    1451 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:03:08.318337    1451 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:03:08.318374    1451 start.go:340] cluster config:
	{Name:download-only-818000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-818000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:03:08.321708    1451 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:03:08.325132    1451 out.go:97] Starting "download-only-818000" primary control-plane node in "download-only-818000" cluster
	I0729 16:03:08.325142    1451 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:03:08.382200    1451 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:03:08.382212    1451 cache.go:56] Caching tarball of preloaded images
	I0729 16:03:08.382359    1451 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:03:08.387466    1451 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0729 16:03:08.387473    1451 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0729 16:03:08.462424    1451 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:03:17.062351    1451 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0729 16:03:17.062514    1451 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0729 16:03:17.604715    1451 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:03:17.604905    1451 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/download-only-818000/config.json ...
	I0729 16:03:17.604920    1451 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/download-only-818000/config.json: {Name:mkb1521bfe983074d4d588bd2f21d20da0d1042d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:03:17.605156    1451 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:03:17.605277    1451 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/darwin/arm64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-818000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-818000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

TestDownloadOnly/v1.30.3/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.10s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-818000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-beta.0/json-events (14.05s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-994000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-994000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 : (14.052517584s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (14.05s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-994000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-994000: exit status 85 (76.400958ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-418000 | jenkins | v1.33.1 | 29 Jul 24 16:02 PDT |                     |
	|         | -p download-only-418000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:03 PDT | 29 Jul 24 16:03 PDT |
	| delete  | -p download-only-418000             | download-only-418000 | jenkins | v1.33.1 | 29 Jul 24 16:03 PDT | 29 Jul 24 16:03 PDT |
	| start   | -o=json --download-only             | download-only-818000 | jenkins | v1.33.1 | 29 Jul 24 16:03 PDT |                     |
	|         | -p download-only-818000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:03 PDT | 29 Jul 24 16:03 PDT |
	| delete  | -p download-only-818000             | download-only-818000 | jenkins | v1.33.1 | 29 Jul 24 16:03 PDT | 29 Jul 24 16:03 PDT |
	| start   | -o=json --download-only             | download-only-994000 | jenkins | v1.33.1 | 29 Jul 24 16:03 PDT |                     |
	|         | -p download-only-994000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 16:03:19
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 16:03:19.395422    1479 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:03:19.395561    1479 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:03:19.395565    1479 out.go:304] Setting ErrFile to fd 2...
	I0729 16:03:19.395567    1479 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:03:19.395703    1479 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:03:19.396740    1479 out.go:298] Setting JSON to true
	I0729 16:03:19.412642    1479 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":162,"bootTime":1722294037,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:03:19.412702    1479 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:03:19.417129    1479 out.go:97] [download-only-994000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:03:19.417205    1479 notify.go:220] Checking for updates...
	I0729 16:03:19.420101    1479 out.go:169] MINIKUBE_LOCATION=19347
	I0729 16:03:19.424134    1479 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:03:19.428133    1479 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:03:19.431140    1479 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:03:19.434156    1479 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	W0729 16:03:19.440041    1479 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 16:03:19.440194    1479 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:03:19.443047    1479 out.go:97] Using the qemu2 driver based on user configuration
	I0729 16:03:19.443057    1479 start.go:297] selected driver: qemu2
	I0729 16:03:19.443062    1479 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:03:19.443116    1479 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:03:19.446159    1479 out.go:169] Automatically selected the socket_vmnet network
	I0729 16:03:19.449647    1479 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 16:03:19.449732    1479 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 16:03:19.449754    1479 cni.go:84] Creating CNI manager for ""
	I0729 16:03:19.449761    1479 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:03:19.449765    1479 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:03:19.449809    1479 start.go:340] cluster config:
	{Name:download-only-994000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-994000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:03:19.453210    1479 iso.go:125] acquiring lock: {Name:mk86c45be59c417774b614a249206c386d8d7c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:03:19.456066    1479 out.go:97] Starting "download-only-994000" primary control-plane node in "download-only-994000" cluster
	I0729 16:03:19.456074    1479 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 16:03:19.515428    1479 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 16:03:19.515452    1479 cache.go:56] Caching tarball of preloaded images
	I0729 16:03:19.515622    1479 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 16:03:19.519740    1479 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0729 16:03:19.519748    1479 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 16:03:19.599960    1479 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:5025ece13368183bde5a7f01207f4bc3 -> /Users/jenkins/minikube-integration/19347-923/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-994000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-994000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.10s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-994000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.34s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-519000 --alsologtostderr --binary-mirror http://127.0.0.1:49323 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-519000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-519000
--- PASS: TestBinaryMirror (0.34s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-529000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-529000: exit status 85 (54.585708ms)

-- stdout --
	* Profile "addons-529000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-529000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-529000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-529000: exit status 85 (58.407375ms)

-- stdout --
	* Profile "addons-529000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-529000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (206.94s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-529000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-529000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m26.938206791s)
--- PASS: TestAddons/Setup (206.94s)

TestAddons/serial/Volcano (37.94s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 7.378875ms
addons_test.go:905: volcano-admission stabilized in 7.478833ms
addons_test.go:897: volcano-scheduler stabilized in 7.549875ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-2wpdf" [d66b6f4d-6b70-4559-baaf-d09c43bc1af8] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003881166s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-tqjpw" [7a792803-7bb5-4ce5-899e-63a9ea1233a3] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004159959s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-f5hls" [d6b0ea40-1d73-4225-aa68-53f6db21e92e] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003832083s
addons_test.go:932: (dbg) Run:  kubectl --context addons-529000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-529000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-529000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [f248001b-dac1-4364-919c-156ae008ed77] Pending
helpers_test.go:344: "test-job-nginx-0" [f248001b-dac1-4364-919c-156ae008ed77] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [f248001b-dac1-4364-919c-156ae008ed77] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003847958s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-529000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-529000 addons disable volcano --alsologtostderr -v=1: (9.714409416s)
--- PASS: TestAddons/serial/Volcano (37.94s)

TestAddons/serial/GCPAuth/Namespaces (0.08s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-529000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-529000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.08s)

TestAddons/parallel/Registry (13.16s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.116458ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-j5kbn" [2b631f5a-5d0b-4bd3-b9d4-b6fd99e0c08f] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004340833s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-gvvr5" [591ee92a-f77d-475f-b251-bda75a835756] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003865542s
addons_test.go:342: (dbg) Run:  kubectl --context addons-529000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-529000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-529000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.879868s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-529000 ip
2024/07/29 16:08:09 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-529000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.16s)

TestAddons/parallel/Ingress (18.16s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-529000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-529000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-529000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c0909ddc-1038-480f-9327-ce445a34bf9a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c0909ddc-1038-480f-9327-ce445a34bf9a] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004110292s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-529000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-529000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-529000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-529000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-529000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-529000 addons disable ingress --alsologtostderr -v=1: (7.19469375s)
--- PASS: TestAddons/parallel/Ingress (18.16s)

TestAddons/parallel/InspektorGadget (10.21s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6mvpl" [e2f47ade-d0b5-436f-abea-231ca1290ab5] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003971959s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-529000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-529000: (5.203460208s)
--- PASS: TestAddons/parallel/InspektorGadget (10.21s)

TestAddons/parallel/MetricsServer (5.25s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.391083ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-jksfq" [3c4414f9-9066-44f7-8779-b2ce109f18e5] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004210083s
addons_test.go:417: (dbg) Run:  kubectl --context addons-529000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-529000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.25s)
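
The pass criterion above is simply that `kubectl top` returns data once the metrics-server pod is Running; a manual equivalent with the same context:
    kubectl --context addons-529000 top pods -n kube-system    # only succeeds while the metrics-server addon is enabled
    out/minikube-darwin-arm64 -p addons-529000 addons disable metrics-server --alsologtostderr -v=1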

                                                
                                    
TestAddons/parallel/CSI (52.8s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 2.822125ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-529000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-529000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4c1de9a6-4560-490a-9f53-04e1c9feceb7] Pending
helpers_test.go:344: "task-pv-pod" [4c1de9a6-4560-490a-9f53-04e1c9feceb7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4c1de9a6-4560-490a-9f53-04e1c9feceb7] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.001503458s
addons_test.go:590: (dbg) Run:  kubectl --context addons-529000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-529000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-529000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-529000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-529000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-529000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-529000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [98bb1cae-8847-4117-b820-280e3c7450da] Pending
helpers_test.go:344: "task-pv-pod-restore" [98bb1cae-8847-4117-b820-280e3c7450da] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [98bb1cae-8847-4117-b820-280e3c7450da] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003725791s
addons_test.go:632: (dbg) Run:  kubectl --context addons-529000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-529000 delete pod task-pv-pod-restore: (1.01304725s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-529000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-529000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-529000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-529000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.0787285s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-529000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (52.80s)
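
The steps above amount to a full snapshot-and-restore round trip against the csi-hostpath driver. Condensed as a sketch (same context and testdata manifests; the names in parentheses come from the log):
    kubectl --context addons-529000 create -f testdata/csi-hostpath-driver/pvc.yaml           # claim (hpvc)
    kubectl --context addons-529000 create -f testdata/csi-hostpath-driver/pv-pod.yaml        # pod (task-pv-pod) using the claim
    kubectl --context addons-529000 create -f testdata/csi-hostpath-driver/snapshot.yaml      # VolumeSnapshot (new-snapshot-demo)
    kubectl --context addons-529000 delete pod task-pv-pod
    kubectl --context addons-529000 delete pvc hpvc
    kubectl --context addons-529000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml   # new claim (hpvc-restore) backed by the snapshot
    kubectl --context addons-529000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml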

                                                
                                    
TestAddons/parallel/Headlamp (12.41s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-529000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-2st2x" [b87edbb5-b8e2-472e-ac0c-a02bf3c41e3c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-2st2x" [b87edbb5-b8e2-472e-ac0c-a02bf3c41e3c] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003427625s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-529000 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (12.41s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.17s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-jpzq6" [61f8c801-ee4e-4e06-af49-d7eecf0f36f9] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003384667s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-529000
--- PASS: TestAddons/parallel/CloudSpanner (5.17s)

                                                
                                    
TestAddons/parallel/LocalPath (40.78s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-529000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-529000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-529000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d79a2fc1-bda5-43f9-8d8a-fbe06ccb7a54] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d79a2fc1-bda5-43f9-8d8a-fbe06ccb7a54] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d79a2fc1-bda5-43f9-8d8a-fbe06ccb7a54] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003564667s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-529000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-529000 ssh "cat /opt/local-path-provisioner/pvc-c84d49d8-417c-416e-bc21-fb5edab2942c_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-529000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-529000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-529000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-529000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.333742541s)
--- PASS: TestAddons/parallel/LocalPath (40.78s)
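
A note on the Pending phases above: the claim only binds once a consuming pod is scheduled (an assumption based on the local-path provisioner's usual WaitForFirstConsumer binding mode), and the provisioned volume lives under /opt/local-path-provisioner inside the VM, which is what the ssh step reads back. A sketch:
    kubectl --context addons-529000 apply -f testdata/storage-provisioner-rancher/pvc.yaml   # test-pvc: Pending until a pod consumes it
    kubectl --context addons-529000 apply -f testdata/storage-provisioner-rancher/pod.yaml   # scheduling triggers provisioning; PVC becomes Bound
    out/minikube-darwin-arm64 -p addons-529000 ssh "ls /opt/local-path-provisioner/"         # one pvc-<uid>_default_test-pvc directory per claim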

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.14s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-qdrlj" [f97ff90c-bb70-4120-b52d-c9e684e1e0cb] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004100375s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-529000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.14s)

                                                
                                    
TestAddons/parallel/Yakd (10.2s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-z8jf9" [7532f8f8-de03-417b-b8cd-331262b7dd4a] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003619709s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-529000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-529000 addons disable yakd --alsologtostderr -v=1: (5.198793709s)
--- PASS: TestAddons/parallel/Yakd (10.20s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.39s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-529000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-529000: (12.201067667s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-529000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-529000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-529000
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (10.65s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.65s)

                                                
                                    
TestErrorSpam/setup (33.22s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-043000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-043000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-043000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-043000 --driver=qemu2 : (33.224730167s)
--- PASS: TestErrorSpam/setup (33.22s)

                                                
                                    
TestErrorSpam/start (0.33s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-043000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-043000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-043000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-043000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-043000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-043000 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.24s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-043000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-043000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-043000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-043000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-043000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-043000 status
--- PASS: TestErrorSpam/status (0.24s)

                                                
                                    
TestErrorSpam/pause (0.67s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-043000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-043000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-043000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-043000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-043000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-043000 pause
--- PASS: TestErrorSpam/pause (0.67s)

                                                
                                    
TestErrorSpam/unpause (0.63s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-043000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-043000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-043000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-043000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-043000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-043000 unpause
--- PASS: TestErrorSpam/unpause (0.63s)

                                                
                                    
TestErrorSpam/stop (64.29s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-043000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-043000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-043000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-043000 stop: (12.201050958s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-043000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-043000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-043000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-043000 stop: (26.059825625s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-043000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-043000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-043000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-043000 stop: (26.024568917s)
--- PASS: TestErrorSpam/stop (64.29s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19347-923/.minikube/files/etc/test/nested/copy/1390/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (50.99s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-753000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0729 16:12:01.324929    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
E0729 16:12:01.331840    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
E0729 16:12:01.343889    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
E0729 16:12:01.365967    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
E0729 16:12:01.408099    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
E0729 16:12:01.490162    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
E0729 16:12:01.652237    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
E0729 16:12:01.974327    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
E0729 16:12:02.616504    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
E0729 16:12:03.898617    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
E0729 16:12:06.460733    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
E0729 16:12:11.582793    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-753000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (50.984900541s)
--- PASS: TestFunctional/serial/StartWithProxy (50.99s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (37.58s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-753000 --alsologtostderr -v=8
E0729 16:12:21.824827    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
E0729 16:12:42.306669    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-753000 --alsologtostderr -v=8: (37.575033333s)
functional_test.go:659: soft start took 37.575425708s for "functional-753000" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.58s)

                                                
                                    
TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.05s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-753000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.5s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.50s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-753000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local3072705578/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 cache add minikube-local-cache-test:functional-753000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 cache delete minikube-local-cache-test:functional-753000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-753000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)
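
`cache add` also accepts images that exist only in the host's Docker daemon; the test builds a throwaway image and round-trips it through the cache. A sketch (<build-context-dir> stands in for whatever directory holds the Dockerfile):
    docker build -t minikube-local-cache-test:functional-753000 <build-context-dir>
    out/minikube-darwin-arm64 -p functional-753000 cache add minikube-local-cache-test:functional-753000
    out/minikube-darwin-arm64 -p functional-753000 cache delete minikube-local-cache-test:functional-753000
    docker rmi minikube-local-cache-test:functional-753000   # clean up the host-side image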

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (0.62s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-753000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (66.107458ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.62s)
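
The non-zero exit in the middle is the point of the test: the image really is gone from the node until `cache reload` pushes the cached copies back in. Condensed:
    out/minikube-darwin-arm64 -p functional-753000 ssh sudo docker rmi registry.k8s.io/pause:latest       # remove the image inside the node
    out/minikube-darwin-arm64 -p functional-753000 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exit 1: no such image
    out/minikube-darwin-arm64 -p functional-753000 cache reload                                           # re-push everything in the local cache
    out/minikube-darwin-arm64 -p functional-753000 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again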

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.66s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 kubectl -- --context functional-753000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.66s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.93s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-753000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.93s)

                                                
                                    
TestFunctional/serial/ExtraConfig (37s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-753000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0729 16:13:23.268159    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-753000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.000798208s)
functional_test.go:757: restart took 37.000926667s for "functional-753000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.00s)
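
--extra-config takes component.key=value pairs and is applied by restarting the existing cluster, which is why the step is timed as a restart. The invocation, for reference:
    out/minikube-darwin-arm64 start -p functional-753000 \
        --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
        --wait=all   # restart with the admission plugin list handed to the API server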

                                                
                                    
TestFunctional/serial/ComponentHealth (0.04s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-753000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

                                                
                                    
TestFunctional/serial/LogsCmd (0.66s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.66s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.7s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd5883841/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.70s)

                                                
                                    
TestFunctional/serial/InvalidService (3.9s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-753000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-753000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-753000: exit status 115 (97.663375ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30603 |
	|-----------|-------------|-------------|----------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-753000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.90s)
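
Exit status 115 maps to the SVC_UNREACHABLE failure shown in stderr: the NodePort URL is discoverable, but no running pod backs the service. The reproduction is just:
    kubectl --context functional-753000 apply -f testdata/invalidsvc.yaml
    out/minikube-darwin-arm64 service invalid-svc -p functional-753000    # exit 115: SVC_UNREACHABLE
    kubectl --context functional-753000 delete -f testdata/invalidsvc.yaml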

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.21s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-753000 config get cpus: exit status 14 (27.845375ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-753000 config get cpus: exit status 14 (28.590167ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.21s)
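
The two exit-status-14 branches above are `config get` on an unset key; the full set/get/unset cycle looks like:
    out/minikube-darwin-arm64 -p functional-753000 config get cpus     # exit 14: "specified key could not be found in config"
    out/minikube-darwin-arm64 -p functional-753000 config set cpus 2
    out/minikube-darwin-arm64 -p functional-753000 config get cpus     # prints 2
    out/minikube-darwin-arm64 -p functional-753000 config unset cpus   # back to exit 14 on the next get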

                                                
                                    
TestFunctional/parallel/DashboardCmd (12.23s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-753000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-753000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2333: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.23s)
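
The test starts the dashboard proxy in the background and then tears it down; the "unable to kill pid" note just means the process had already exited. An interactive equivalent (a sketch):
    out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-753000 &   # prints the proxied dashboard URL
    kill %1                                                                         # stop the proxy when done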

                                                
                                    
TestFunctional/parallel/DryRun (0.33s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-753000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-753000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (217.213458ms)
-- stdout --
	* [functional-753000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
-- /stdout --
** stderr ** 
	I0729 16:14:24.962948    2312 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:14:24.963144    2312 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:14:24.963147    2312 out.go:304] Setting ErrFile to fd 2...
	I0729 16:14:24.963150    2312 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:14:24.963378    2312 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:14:24.967578    2312 out.go:298] Setting JSON to false
	I0729 16:14:24.991870    2312 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":827,"bootTime":1722294037,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:14:24.991938    2312 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:14:25.000833    2312 out.go:177] * [functional-753000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:14:25.008982    2312 notify.go:220] Checking for updates...
	I0729 16:14:25.011853    2312 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:14:25.023772    2312 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:14:25.037876    2312 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:14:25.051831    2312 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:14:25.065827    2312 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:14:25.075845    2312 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:14:25.088142    2312 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:14:25.088375    2312 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:14:25.100138    2312 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:14:25.111819    2312 start.go:297] selected driver: qemu2
	I0729 16:14:25.111828    2312 start.go:901] validating driver "qemu2" against &{Name:functional-753000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-753000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:14:25.111901    2312 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:14:25.118921    2312 out.go:177] 
	W0729 16:14:25.122797    2312 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 16:14:25.126779    2312 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-753000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.33s)
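
The first run is expected to fail: --dry-run still validates flags against the existing profile, and 250MB is below the 1800MB floor named in the RSRC_INSUFFICIENT_REQ_MEMORY message. The second run drops the bad flag and validates cleanly:
    out/minikube-darwin-arm64 start -p functional-753000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2   # exit 23
    out/minikube-darwin-arm64 start -p functional-753000 --dry-run --alsologtostderr -v=1 --driver=qemu2             # passes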

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-753000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-753000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.191583ms)
-- stdout --
	* [functional-753000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
-- /stdout --
** stderr ** 
	I0729 16:14:24.834860    2307 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:14:24.834964    2307 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:14:24.834967    2307 out.go:304] Setting ErrFile to fd 2...
	I0729 16:14:24.834969    2307 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:14:24.835099    2307 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
	I0729 16:14:24.836447    2307 out.go:298] Setting JSON to false
	I0729 16:14:24.854818    2307 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":827,"bootTime":1722294037,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 16:14:24.854890    2307 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:14:24.859697    2307 out.go:177] * [functional-753000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0729 16:14:24.867846    2307 out.go:177]   - MINIKUBE_LOCATION=19347
	I0729 16:14:24.867872    2307 notify.go:220] Checking for updates...
	I0729 16:14:24.873791    2307 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	I0729 16:14:24.876842    2307 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:14:24.879850    2307 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:14:24.882843    2307 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	I0729 16:14:24.885830    2307 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:14:24.889154    2307 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:14:24.889403    2307 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:14:24.893792    2307 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0729 16:14:24.899776    2307 start.go:297] selected driver: qemu2
	I0729 16:14:24.899782    2307 start.go:901] validating driver "qemu2" against &{Name:functional-753000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-753000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:14:24.899830    2307 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:14:24.905832    2307 out.go:177] 
	W0729 16:14:24.909851    2307 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 16:14:24.912817    2307 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
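
InternationalLanguage runs minikube start under a French locale with too little memory and asserts that the RSRC_INSUFFICIENT_REQ_MEMORY error comes back localized. The triggering invocation is not captured in this excerpt; a representative command (the exact flags are assumptions, not shown in the log) would be roughly:

    # sketch only: locale via LC_ALL, memory deliberately below the 1800MB minimum
    LC_ALL=fr out/minikube-darwin-arm64 start -p functional-753000 --dry-run --memory 250MB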

TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)
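
The second status invocation exercises the Go-template formatter against minikube's status fields (.Host, .Kubelet, .APIServer, .Kubeconfig; "kublet" is simply the literal label text in the test's format string, not a field name). On a healthy cluster a line of this shape comes back (illustrative values):

    out/minikube-darwin-arm64 -p functional-753000 status -f 'host:{{.Host}}'
    # expected shape: host:Running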

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/PersistentVolumeClaim (26.96s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [914a5f06-d2fb-4702-8bb3-ec79da5263eb] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004202584s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-753000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-753000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-753000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-753000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f8fbc925-b317-4782-ad60-dcbb8ad93ac3] Pending
helpers_test.go:344: "sp-pod" [f8fbc925-b317-4782-ad60-dcbb8ad93ac3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f8fbc925-b317-4782-ad60-dcbb8ad93ac3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003842167s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-753000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-753000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-753000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8c56babe-3569-456e-b54c-7afb2d6e843f] Pending
helpers_test.go:344: "sp-pod" [8c56babe-3569-456e-b54c-7afb2d6e843f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8c56babe-3569-456e-b54c-7afb2d6e843f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003879417s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-753000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.96s)
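
The pvc.yaml fixture itself is not reproduced in the log. A minimal claim equivalent to what the test applies (the name myclaim matches the `get pvc myclaim` call above; the storage size is illustrative) could be created with:

    kubectl --context functional-753000 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 500Mi
    EOF

The touch/delete/re-apply sequence above is the point of the test: /tmp/mount/foo survives pod deletion because it is backed by the claim, not by the pod.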

TestFunctional/parallel/SSHCmd (0.16s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.16s)

TestFunctional/parallel/CpCmd (0.38s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh -n functional-753000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 cp functional-753000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3082080764/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh -n functional-753000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh -n functional-753000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.38s)

TestFunctional/parallel/FileSync (0.06s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1390/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "sudo cat /etc/test/nested/copy/1390/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.06s)

TestFunctional/parallel/CertSync (0.37s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1390.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "sudo cat /etc/ssl/certs/1390.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1390.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "sudo cat /usr/share/ca-certificates/1390.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/13902.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "sudo cat /etc/ssl/certs/13902.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/13902.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "sudo cat /usr/share/ca-certificates/13902.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.37s)
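
The 51391683.0 and 3ec20f2e.0 names checked above are OpenSSL subject-hash links, assuming the standard c_rehash convention in which a trusted certificate is also reachable as <subject-hash>.0. The hash for the first file can be reproduced with:

    openssl x509 -noout -hash -in /etc/ssl/certs/1390.pem    # expected to print 51391683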

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-753000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-753000 ssh "sudo systemctl is-active crio": exit status 1 (59.635167ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)
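
`systemctl is-active` exits 0 only for an active unit; for an inactive one it prints "inactive" and exits non-zero (status 3 above), which minikube ssh surfaces as its own exit status 1. The test relies on exactly that, so a script-level equivalent of the check looks like:

    # sketch: the non-zero exit is the signal, the printed state is informational
    if out/minikube-darwin-arm64 -p functional-753000 ssh "sudo systemctl is-active crio" >/dev/null 2>&1; then
      echo "crio is active"
    else
      echo "crio is not active"    # expected here: docker is the configured runtime
    fi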

TestFunctional/parallel/License (0.23s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.23s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.15s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.15s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-753000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-753000
docker.io/kicbase/echo-server:functional-753000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-753000 image ls --format short --alsologtostderr:
I0729 16:14:26.785969    2350 out.go:291] Setting OutFile to fd 1 ...
I0729 16:14:26.786143    2350 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:14:26.786146    2350 out.go:304] Setting ErrFile to fd 2...
I0729 16:14:26.786148    2350 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:14:26.786282    2350 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
I0729 16:14:26.786727    2350 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 16:14:26.786789    2350 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 16:14:26.787650    2350 ssh_runner.go:195] Run: systemctl --version
I0729 16:14:26.787658    2350 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/functional-753000/id_rsa Username:docker}
I0729 16:14:26.811919    2350 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-753000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| docker.io/kicbase/echo-server               | functional-753000 | ce2d2cda2d858 | 4.78MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-753000 | 09c7ab10f4ca0 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.30.3           | d48f992a22722 | 60.5MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| docker.io/localhost/my-image                | functional-753000 | d0473d6dad6f6 | 1.41MB |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 2351f570ed0ea | 87.9MB |
| docker.io/library/nginx                     | latest            | 43b17fe33c4b4 | 193MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 61773190d42ff | 112MB  |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 8e97cdb19e7cc | 107MB  |
| docker.io/library/nginx                     | alpine            | d7cd33d7d4ed1 | 44.8MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-753000 image ls --format table --alsologtostderr:
I0729 16:14:28.742080    2363 out.go:291] Setting OutFile to fd 1 ...
I0729 16:14:28.742252    2363 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:14:28.742255    2363 out.go:304] Setting ErrFile to fd 2...
I0729 16:14:28.742258    2363 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:14:28.742398    2363 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
I0729 16:14:28.742811    2363 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 16:14:28.742874    2363 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 16:14:28.743709    2363 ssh_runner.go:195] Run: systemctl --version
I0729 16:14:28.743717    2363 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/functional-753000/id_rsa Username:docker}
I0729 16:14:28.765815    2363 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/07/29 16:14:37 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-753000 image ls --format json --alsologtostderr:
[{"id":"09c7ab10f4ca088a794ad46bb23304cccec8149c80b96a6088e6f2215933263d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-753000"],"size":"30"},{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"112000000"},{"id":"43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"d0473d6dad6f68002c3e9abba0877c86941b670c37e647102f63da91c00fdee5","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-753000"],"size":"1410000"},{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"107000000"},{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"60500000"},{"id":"d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"44800000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-753000"],"size":"4780000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"87900000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-753000 image ls --format json --alsologtostderr:
I0729 16:14:28.673823    2361 out.go:291] Setting OutFile to fd 1 ...
I0729 16:14:28.673953    2361 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:14:28.673958    2361 out.go:304] Setting ErrFile to fd 2...
I0729 16:14:28.673961    2361 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:14:28.674095    2361 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
I0729 16:14:28.674547    2361 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 16:14:28.674607    2361 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 16:14:28.675449    2361 ssh_runner.go:195] Run: systemctl --version
I0729 16:14:28.675458    2361 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/functional-753000/id_rsa Username:docker}
I0729 16:14:28.697732    2361 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-753000 image ls --format yaml --alsologtostderr:
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "60500000"
- id: d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "112000000"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "87900000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "107000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-753000
size: "4780000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: 09c7ab10f4ca088a794ad46bb23304cccec8149c80b96a6088e6f2215933263d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-753000
size: "30"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-753000 image ls --format yaml --alsologtostderr:
I0729 16:14:26.855953    2352 out.go:291] Setting OutFile to fd 1 ...
I0729 16:14:26.856109    2352 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:14:26.856113    2352 out.go:304] Setting ErrFile to fd 2...
I0729 16:14:26.856116    2352 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:14:26.856242    2352 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
I0729 16:14:26.856692    2352 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 16:14:26.856751    2352 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 16:14:26.857554    2352 ssh_runner.go:195] Run: systemctl --version
I0729 16:14:26.857563    2352 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/functional-753000/id_rsa Username:docker}
I0729 16:14:26.884844    2352 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-753000 ssh pgrep buildkitd: exit status 1 (54.642583ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image build -t localhost/my-image:functional-753000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-753000 image build -t localhost/my-image:functional-753000 testdata/build --alsologtostderr: (1.621001042s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-753000 image build -t localhost/my-image:functional-753000 testdata/build --alsologtostderr:
I0729 16:14:26.982672    2356 out.go:291] Setting OutFile to fd 1 ...
I0729 16:14:26.982900    2356 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:14:26.982907    2356 out.go:304] Setting ErrFile to fd 2...
I0729 16:14:26.982910    2356 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:14:26.983046    2356 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19347-923/.minikube/bin
I0729 16:14:26.983457    2356 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 16:14:26.984205    2356 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 16:14:26.985114    2356 ssh_runner.go:195] Run: systemctl --version
I0729 16:14:26.985122    2356 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19347-923/.minikube/machines/functional-753000/id_rsa Username:docker}
I0729 16:14:27.007458    2356 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.1594975022.tar
I0729 16:14:27.007517    2356 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0729 16:14:27.011134    2356 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1594975022.tar
I0729 16:14:27.012658    2356 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1594975022.tar: stat -c "%s %y" /var/lib/minikube/build/build.1594975022.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1594975022.tar': No such file or directory
I0729 16:14:27.012674    2356 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.1594975022.tar --> /var/lib/minikube/build/build.1594975022.tar (3072 bytes)
I0729 16:14:27.021138    2356 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1594975022
I0729 16:14:27.024514    2356 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1594975022 -xf /var/lib/minikube/build/build.1594975022.tar
I0729 16:14:27.027544    2356 docker.go:360] Building image: /var/lib/minikube/build/build.1594975022
I0729 16:14:27.027586    2356 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-753000 /var/lib/minikube/build/build.1594975022
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.2s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:d0473d6dad6f68002c3e9abba0877c86941b670c37e647102f63da91c00fdee5 done
#8 naming to localhost/my-image:functional-753000 done
#8 DONE 0.0s
I0729 16:14:28.505671    2356 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-753000 /var/lib/minikube/build/build.1594975022: (1.478095542s)
I0729 16:14:28.505750    2356 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1594975022
I0729 16:14:28.510653    2356 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1594975022.tar
I0729 16:14:28.514358    2356 build_images.go:217] Built localhost/my-image:functional-753000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.1594975022.tar
I0729 16:14:28.514380    2356 build_images.go:133] succeeded building to: functional-753000
I0729 16:14:28.514384    2356 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.75s)
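
The 97-byte Dockerfile under testdata/build is not shown, but BuildKit steps #5-#7 pin down its contents; a reconstruction (base image tag taken from step #5, the digest being resolved at build time) would be roughly:

    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /
    EOF

The resulting image appears in the table output above as docker.io/localhost/my-image:functional-753000 (ID d0473d6dad6f6), matching the sha256 written in step #8.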

TestFunctional/parallel/ImageCommands/Setup (1.75s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.736739916s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-753000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

TestFunctional/parallel/DockerEnv/bash (0.27s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-753000 docker-env) && out/minikube-darwin-arm64 status -p functional-753000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-753000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.27s)
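
docker-env prints shell exports that point the host docker CLI at the daemon inside the VM, which is why the subsequent `docker images` call lists the cluster's images. The emitted variables look like this (the values here are assumptions based on this profile's IP and MINIKUBE_HOME, not copied from the log):

    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://192.168.105.4:2376"
    export DOCKER_CERT_PATH="/Users/jenkins/minikube-integration/19347-923/.minikube/certs"
    export MINIKUBE_ACTIVE_DOCKERD="functional-753000"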

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-753000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-753000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-753000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-753000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2115: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.21s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-753000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-753000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [f3ef321a-1c69-4000-ba6f-a9b35a47e3d9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [f3ef321a-1c69-4000-ba6f-a9b35a47e3d9] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004016s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.13s)
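
testdata/testsvc.yaml is not reproduced here; from the pod name, the run=nginx-svc label, and the LoadBalancer ingress checked below, an equivalent manifest would be roughly:

    kubectl --context functional-753000 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-svc
      labels:
        run: nginx-svc
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-svc
    spec:
      type: LoadBalancer
      selector:
        run: nginx-svc
      ports:
      - port: 80
    EOF

With `minikube tunnel` running, the service's ClusterIP is installed as its load-balancer ingress IP, which is what WaitService/IngressIP reads back below.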

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image load --daemon docker.io/kicbase/echo-server:functional-753000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image load --daemon docker.io/kicbase/echo-server:functional-753000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.35s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-753000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image load --daemon docker.io/kicbase/echo-server:functional-753000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image save docker.io/kicbase/echo-server:functional-753000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image rm docker.io/kicbase/echo-server:functional-753000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-753000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image save --daemon docker.io/kicbase/echo-server:functional-753000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-753000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-753000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.71.104 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-753000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-753000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-753000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-qwt2w" [24923419-353a-4823-8e45-8fc069d43997] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-qwt2w" [24923419-353a-4823-8e45-8fc069d43997] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.0040495s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

TestFunctional/parallel/ServiceCmd/List (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.27s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 service list -o json
functional_test.go:1490: Took "274.983833ms" to run "out/minikube-darwin-arm64 -p functional-753000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:31956
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:31956
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
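
The ServiceCmd variants above all resolve the same NodePort endpoint; condensed, with flags copied from the test invocations:

    out/minikube-darwin-arm64 -p functional-753000 service list            # human-readable table
    out/minikube-darwin-arm64 -p functional-753000 service list -o json    # machine-readable
    out/minikube-darwin-arm64 -p functional-753000 service --namespace=default --https --url hello-node
    out/minikube-darwin-arm64 -p functional-753000 service hello-node --url
    out/minikube-darwin-arm64 -p functional-753000 service hello-node --url --format={{.IP}}
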
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "80.001583ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "34.858125ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.11s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "83.2745ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "34.739875ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
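
For reference, the profile-listing variants exercised above (the note on the light variants reflects their documented purpose of skipping cluster status probes, so treat it as a summary rather than test output):

    out/minikube-darwin-arm64 profile list                  # full listing, probes each cluster
    out/minikube-darwin-arm64 profile list -l               # light listing, skips status probes
    out/minikube-darwin-arm64 profile list -o json
    out/minikube-darwin-arm64 profile list -o json --light
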
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-753000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3810492085/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722294859006997000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3810492085/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722294859006997000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3810492085/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722294859006997000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3810492085/001/test-1722294859006997000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-753000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (56.285542ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 29 23:14 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 29 23:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 29 23:14 test-1722294859006997000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh cat /mount-9p/test-1722294859006997000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-753000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e957466e-41c7-4f7d-82c2-451e8366802e] Pending
helpers_test.go:344: "busybox-mount" [e957466e-41c7-4f7d-82c2-451e8366802e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e957466e-41c7-4f7d-82c2-451e8366802e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e957466e-41c7-4f7d-82c2-451e8366802e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003895292s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-753000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-753000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3810492085/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.07s)
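
A condensed sketch of the mount round-trip above (the /tmp/mnt path is a hypothetical stand-in for the per-test temp directory; the test runs the mount as a managed daemon rather than a backgrounded shell job):

    # start the 9p mount in the background
    out/minikube-darwin-arm64 mount -p functional-753000 /tmp/mnt:/mount-9p --alsologtostderr -v=1 &
    # the first findmnt may fail while the mount is still coming up, so retry
    out/minikube-darwin-arm64 -p functional-753000 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-darwin-arm64 -p functional-753000 ssh -- ls -la /mount-9p
    # tear down
    out/minikube-darwin-arm64 -p functional-753000 ssh "sudo umount -f /mount-9p"
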
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-753000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port126510783/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-753000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (66.175458ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-753000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port126510783/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-753000 ssh "sudo umount -f /mount-9p": exit status 1 (59.492542ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-753000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-753000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port126510783/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.86s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-753000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3225498984/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-753000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3225498984/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-753000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3225498984/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-darwin-arm64 -p functional-753000 ssh "findmnt -T" /mount1: (1.319425125s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-753000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-753000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3225498984/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-753000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3225498984/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-753000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3225498984/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.47s)
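
The cleanup path exercised here is the --kill flag, which terminates every mount process for the profile in one call; as run by the test:

    out/minikube-darwin-arm64 mount -p functional-753000 --kill=true
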
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-753000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-753000
--- PASS: TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-753000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-291000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0729 16:14:45.189000    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
E0729 16:17:01.322258    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
E0729 16:17:29.030775    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-291000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m12.112123s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (192.31s)
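
The HA cluster bring-up and health check, copied verbatim from the test invocations above (--ha requests a multi-control-plane topology):

    out/minikube-darwin-arm64 start -p ha-291000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2
    out/minikube-darwin-arm64 -p ha-291000 status -v=7 --alsologtostderr
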
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-291000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-291000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-291000 -- rollout status deployment/busybox: (3.044394083s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-291000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-291000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-291000 -- exec busybox-fc5497c4f-25tth -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-291000 -- exec busybox-fc5497c4f-4mltt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-291000 -- exec busybox-fc5497c4f-7s2ll -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-291000 -- exec busybox-fc5497c4f-25tth -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-291000 -- exec busybox-fc5497c4f-4mltt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-291000 -- exec busybox-fc5497c4f-7s2ll -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-291000 -- exec busybox-fc5497c4f-25tth -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-291000 -- exec busybox-fc5497c4f-4mltt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-291000 -- exec busybox-fc5497c4f-7s2ll -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.52s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-291000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-291000 -- exec busybox-fc5497c4f-25tth -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-291000 -- exec busybox-fc5497c4f-25tth -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-291000 -- exec busybox-fc5497c4f-4mltt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-291000 -- exec busybox-fc5497c4f-4mltt -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-291000 -- exec busybox-fc5497c4f-7s2ll -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-291000 -- exec busybox-fc5497c4f-7s2ll -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.76s)
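
The host-reachability probe above boils down to two in-pod commands; a sketch (the pod name is run-specific, so substitute one from "kubectl get pods"):

    # learn the host IP as seen from inside the cluster
    out/minikube-darwin-arm64 kubectl -p ha-291000 -- exec busybox-fc5497c4f-25tth -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # then ping that address (192.168.105.1 in this run)
    out/minikube-darwin-arm64 kubectl -p ha-291000 -- exec busybox-fc5497c4f-25tth -- sh -c "ping -c 1 192.168.105.1"
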
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-291000 -v=7 --alsologtostderr
E0729 16:18:41.919709    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0729 16:18:41.926097    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0729 16:18:41.938153    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0729 16:18:41.960223    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0729 16:18:42.002310    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0729 16:18:42.084457    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0729 16:18:42.246575    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0729 16:18:42.568541    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0729 16:18:43.208955    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0729 16:18:44.491061    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-291000 -v=7 --alsologtostderr: (51.572915875s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 status -v=7 --alsologtostderr
E0729 16:18:47.051240    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (51.81s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-291000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.26s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 cp testdata/cp-test.txt ha-291000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 cp ha-291000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2201020633/001/cp-test_ha-291000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 cp ha-291000:/home/docker/cp-test.txt ha-291000-m02:/home/docker/cp-test_ha-291000_ha-291000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m02 "sudo cat /home/docker/cp-test_ha-291000_ha-291000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 cp ha-291000:/home/docker/cp-test.txt ha-291000-m03:/home/docker/cp-test_ha-291000_ha-291000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m03 "sudo cat /home/docker/cp-test_ha-291000_ha-291000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 cp ha-291000:/home/docker/cp-test.txt ha-291000-m04:/home/docker/cp-test_ha-291000_ha-291000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m04 "sudo cat /home/docker/cp-test_ha-291000_ha-291000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 cp testdata/cp-test.txt ha-291000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 cp ha-291000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2201020633/001/cp-test_ha-291000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 cp ha-291000-m02:/home/docker/cp-test.txt ha-291000:/home/docker/cp-test_ha-291000-m02_ha-291000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000 "sudo cat /home/docker/cp-test_ha-291000-m02_ha-291000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 cp ha-291000-m02:/home/docker/cp-test.txt ha-291000-m03:/home/docker/cp-test_ha-291000-m02_ha-291000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m03 "sudo cat /home/docker/cp-test_ha-291000-m02_ha-291000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 cp ha-291000-m02:/home/docker/cp-test.txt ha-291000-m04:/home/docker/cp-test_ha-291000-m02_ha-291000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m04 "sudo cat /home/docker/cp-test_ha-291000-m02_ha-291000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 cp testdata/cp-test.txt ha-291000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 cp ha-291000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2201020633/001/cp-test_ha-291000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 cp ha-291000-m03:/home/docker/cp-test.txt ha-291000:/home/docker/cp-test_ha-291000-m03_ha-291000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000 "sudo cat /home/docker/cp-test_ha-291000-m03_ha-291000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 cp ha-291000-m03:/home/docker/cp-test.txt ha-291000-m02:/home/docker/cp-test_ha-291000-m03_ha-291000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m02 "sudo cat /home/docker/cp-test_ha-291000-m03_ha-291000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 cp ha-291000-m03:/home/docker/cp-test.txt ha-291000-m04:/home/docker/cp-test_ha-291000-m03_ha-291000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m04 "sudo cat /home/docker/cp-test_ha-291000-m03_ha-291000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 cp testdata/cp-test.txt ha-291000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 cp ha-291000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2201020633/001/cp-test_ha-291000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 cp ha-291000-m04:/home/docker/cp-test.txt ha-291000:/home/docker/cp-test_ha-291000-m04_ha-291000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000 "sudo cat /home/docker/cp-test_ha-291000-m04_ha-291000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 cp ha-291000-m04:/home/docker/cp-test.txt ha-291000-m02:/home/docker/cp-test_ha-291000-m04_ha-291000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m02 "sudo cat /home/docker/cp-test_ha-291000-m04_ha-291000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 cp ha-291000-m04:/home/docker/cp-test.txt ha-291000-m03:/home/docker/cp-test_ha-291000-m04_ha-291000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m03 "sudo cat /home/docker/cp-test_ha-291000-m04_ha-291000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.35s)
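
CopyFile cycles "minikube cp" through every node pairing; the three shapes it uses, condensed (the node-to-host destination is written here to the working directory instead of the test's temp path):

    # host -> node
    out/minikube-darwin-arm64 -p ha-291000 cp testdata/cp-test.txt ha-291000:/home/docker/cp-test.txt
    # node -> host
    out/minikube-darwin-arm64 -p ha-291000 cp ha-291000:/home/docker/cp-test.txt ./cp-test_ha-291000.txt
    # node -> node, verified over ssh
    out/minikube-darwin-arm64 -p ha-291000 cp ha-291000:/home/docker/cp-test.txt ha-291000-m02:/home/docker/cp-test_ha-291000_ha-291000-m02.txt
    out/minikube-darwin-arm64 -p ha-291000 ssh -n ha-291000-m02 "sudo cat /home/docker/cp-test_ha-291000_ha-291000-m02.txt"
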
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0729 16:28:24.381817    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/addons-529000/client.crt: no such file or directory
E0729 16:28:41.909185    1390 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19347-923/.minikube/profiles/functional-753000/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m20.218909584s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (80.22s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.04s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-254000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-254000 --output=json --user=testUser: (3.474366375s)
--- PASS: TestJSONOutput/stop/Command (3.47s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-250000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-250000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.27925ms)
-- stdout --
	{"specversion":"1.0","id":"9dea084f-abc2-4808-9c2d-b0958c770c8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-250000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bdf05c25-a2e5-45a5-8c15-224f793d106d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19347"}}
	{"specversion":"1.0","id":"492ed681-6e55-4e76-ae3e-46c51b80dd77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig"}}
	{"specversion":"1.0","id":"b5d9c99b-64b2-4df5-8cf4-0f650a79ae3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"591d918e-ee38-4547-beab-46d34cd174bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"838b8c11-8f60-436a-b87a-00c97dfa739b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube"}}
	{"specversion":"1.0","id":"16ddb212-4858-48fc-bf81-30fa7f8baba3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"161ca322-dd1a-447d-b6c0-5289be9e658d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-250000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-250000
--- PASS: TestErrorJSONOutput (0.20s)
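
Each stdout line above is a CloudEvents-style JSON object, so the error can be extracted mechanically; a sketch assuming jq is installed (jq is not used by the test itself, and re-running the failing start recreates the profile):

    out/minikube-darwin-arm64 start -p json-output-error-250000 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
    # -> DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on darwin/arm64
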
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.27s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-934000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-934000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (99.913166ms)
-- stdout --
	* [NoKubernetes-934000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19347
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19347-923/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19347-923/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
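
The exit-14 path above is the usage guard: --kubernetes-version and --no-kubernetes are mutually exclusive. Following the hint in the error output, a recoverable sequence would be (a sketch, not part of the test):

    # clear any globally configured version, then start without Kubernetes
    out/minikube-darwin-arm64 config unset kubernetes-version
    out/minikube-darwin-arm64 start -p NoKubernetes-934000 --no-kubernetes --driver=qemu2
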
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-934000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-934000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.07075ms)
-- stdout --
	* The control-plane node NoKubernetes-934000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-934000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
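
The "not running" verification relies on systemctl's exit status; as the test runs it, exit 0 would mean kubelet is active, and any non-zero exit (83 here, because the whole node is stopped) counts as not running:

    out/minikube-darwin-arm64 ssh -p NoKubernetes-934000 "sudo systemctl is-active --quiet service kubelet"
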
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.690408791s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.827425417s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.52s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-934000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-934000: (3.761218375s)
--- PASS: TestNoKubernetes/serial/Stop (3.76s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-934000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-934000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.479667ms)
-- stdout --
	* The control-plane node NoKubernetes-934000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-934000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-480000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.80s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-356000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-356000 --alsologtostderr -v=3: (3.544013375s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.54s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000: exit status 7 (53.928083ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-356000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
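
The EnableAddonAfterStop pattern above is: read the host state (exit status 7 with "Stopped" is expected for a halted profile, hence "may be ok"), then enable the addon against the stopped profile; verbatim from the test:

    out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000
    out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-356000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
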
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-687000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-687000 --alsologtostderr -v=3: (3.405069125s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.41s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-687000 -n no-preload-687000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-687000 -n no-preload-687000: exit status 7 (55.675375ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-687000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-958000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-958000 --alsologtostderr -v=3: (1.932164791s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (1.93s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-958000 -n embed-certs-958000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-958000 -n embed-certs-958000: exit status 7 (54.545875ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-958000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-321000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-321000 --alsologtostderr -v=3: (1.900522625s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.90s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000: exit status 7 (58.699ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-321000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-512000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (2.03s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-512000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-512000 --alsologtostderr -v=3: (2.027668959s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.03s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-512000 -n newest-cni-512000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-512000 -n newest-cni-512000: exit status 7 (62.393958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-512000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (23/282)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.25s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-600000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-600000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-600000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-600000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-600000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-600000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-600000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-600000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-600000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-600000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-600000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: /etc/hosts:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: /etc/resolv.conf:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-600000

>>> host: crictl pods:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: crictl containers:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> k8s: describe netcat deployment:
error: context "cilium-600000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-600000" does not exist

>>> k8s: netcat logs:
error: context "cilium-600000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-600000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-600000" does not exist

>>> k8s: coredns logs:
error: context "cilium-600000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-600000" does not exist

>>> k8s: api server logs:
error: context "cilium-600000" does not exist

>>> host: /etc/cni:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: ip a s:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: ip r s:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: iptables-save:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: iptables table nat:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-600000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-600000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-600000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-600000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-600000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-600000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-600000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-600000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-600000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-600000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-600000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: kubelet daemon config:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> k8s: kubelet logs:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-600000

>>> host: docker daemon status:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: docker daemon config:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: docker system info:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: cri-docker daemon status:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: cri-docker daemon config:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: cri-dockerd version:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: containerd daemon status:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: containerd daemon config:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: containerd config dump:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: crio daemon status:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: crio daemon config:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: /etc/crio:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

>>> host: crio config:
* Profile "cilium-600000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600000"

----------------------- debugLogs end: cilium-600000 [took: 2.148157375s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-600000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-600000
--- SKIP: TestNetworkPlugins/group/cilium (2.25s)

TestStartStop/group/disable-driver-mounts (0.1s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-422000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-422000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.10s)
