Test Report: QEMU_macOS 19598

cb70ad94d69a229bf8d3511a5a00af396fa2386e:2024-09-10:36157

Failed tests (98/274)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 15.31
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.09
33 TestAddons/parallel/Registry 71.35
46 TestCertOptions 12.26
47 TestCertExpiration 197.7
48 TestDockerFlags 12.91
49 TestForceSystemdFlag 12.93
50 TestForceSystemdEnv 10.39
95 TestFunctional/parallel/ServiceCmdConnect 40.57
167 TestMultiControlPlane/serial/StopSecondaryNode 115.96
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 53.67
169 TestMultiControlPlane/serial/RestartSecondaryNode 110.51
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 136.29
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
174 TestMultiControlPlane/serial/StopCluster 103.91
175 TestMultiControlPlane/serial/RestartCluster 5.25
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
177 TestMultiControlPlane/serial/AddSecondaryNode 0.07
181 TestImageBuild/serial/Setup 10.19
184 TestJSONOutput/start/Command 9.84
190 TestJSONOutput/pause/Command 0.08
196 TestJSONOutput/unpause/Command 0.04
213 TestMinikubeProfile 10.16
216 TestMountStart/serial/StartWithMountFirst 10.03
219 TestMultiNode/serial/FreshStart2Nodes 9.86
220 TestMultiNode/serial/DeployApp2Nodes 84.16
221 TestMultiNode/serial/PingHostFrom2Pods 0.09
222 TestMultiNode/serial/AddNode 0.07
223 TestMultiNode/serial/MultiNodeLabels 0.06
224 TestMultiNode/serial/ProfileList 0.08
225 TestMultiNode/serial/CopyFile 0.06
226 TestMultiNode/serial/StopNode 0.14
227 TestMultiNode/serial/StartAfterStop 38.97
228 TestMultiNode/serial/RestartKeepsNodes 8.94
229 TestMultiNode/serial/DeleteNode 0.1
230 TestMultiNode/serial/StopMultiNode 3.83
231 TestMultiNode/serial/RestartMultiNode 5.25
232 TestMultiNode/serial/ValidateNameConflict 20.04
236 TestPreload 10.05
238 TestScheduledStopUnix 10.13
239 TestSkaffold 12.31
242 TestRunningBinaryUpgrade 615.09
244 TestKubernetesUpgrade 18.59
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.26
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 2.14
260 TestStoppedBinaryUpgrade/Upgrade 572.72
262 TestPause/serial/Start 9.86
272 TestNoKubernetes/serial/StartWithK8s 9.85
273 TestNoKubernetes/serial/StartWithStopK8s 5.31
274 TestNoKubernetes/serial/Start 5.29
278 TestNoKubernetes/serial/StartNoArgs 5.3
280 TestNetworkPlugins/group/auto/Start 9.91
281 TestNetworkPlugins/group/kindnet/Start 9.88
282 TestNetworkPlugins/group/calico/Start 9.84
283 TestNetworkPlugins/group/custom-flannel/Start 9.93
284 TestNetworkPlugins/group/false/Start 9.85
285 TestNetworkPlugins/group/enable-default-cni/Start 9.88
286 TestNetworkPlugins/group/flannel/Start 10.01
287 TestNetworkPlugins/group/bridge/Start 10
289 TestNetworkPlugins/group/kubenet/Start 9.93
291 TestStartStop/group/old-k8s-version/serial/FirstStart 9.94
292 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
296 TestStartStop/group/no-preload/serial/FirstStart 10.14
298 TestStartStop/group/old-k8s-version/serial/SecondStart 6.95
299 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
300 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
301 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.09
302 TestStartStop/group/old-k8s-version/serial/Pause 0.11
304 TestStartStop/group/embed-certs/serial/FirstStart 11.73
305 TestStartStop/group/no-preload/serial/DeployApp 0.1
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.14
309 TestStartStop/group/no-preload/serial/SecondStart 5.82
310 TestStartStop/group/embed-certs/serial/DeployApp 0.1
311 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
312 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
314 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
315 TestStartStop/group/no-preload/serial/Pause 0.11
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.96
320 TestStartStop/group/embed-certs/serial/SecondStart 6.28
321 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
322 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
323 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
325 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
326 TestStartStop/group/embed-certs/serial/Pause 0.12
329 TestStartStop/group/newest-cni/serial/FirstStart 10.15
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.92
334 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
335 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
337 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.06
338 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
340 TestStartStop/group/newest-cni/serial/SecondStart 5.25
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
344 TestStartStop/group/newest-cni/serial/Pause 0.1
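Any entry in the table can be re-run in isolation. A minimal sketch, not the CI harness itself, assuming minikube's integration tests live under test/integration behind an `integration` build tag and that the binary under test already exists at out/minikube-darwin-arm64 (as the logs below show):

	# hypothetical local re-run of one failed test; -run accepts the
	# slash-separated names exactly as listed in the table above
	go test -v -tags integration ./test/integration -run 'TestOffline' -timeout 30m
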
TestDownloadOnly/v1.20.0/json-events (15.31s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-581000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-581000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (15.313198125s)

-- stdout --
	{"specversion":"1.0","id":"290f20d5-61f5-4564-b62a-5e874a1007d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-581000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0cb7ee68-757f-4ba2-9ad1-f7c6117fca32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19598"}}
	{"specversion":"1.0","id":"d97d4124-d60f-4faa-a35b-3571e01f97c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig"}}
	{"specversion":"1.0","id":"1f2ec5b0-3566-43a2-a2e2-123a84f9ac75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"e0a8e3f1-2f3e-4e44-9d4f-6815397de259","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ecc4a4b0-227d-44a9-8c31-bb2818d7d554","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube"}}
	{"specversion":"1.0","id":"8ff9e4dd-4830-4218-b044-160d6cc67b69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"e7a6b645-1bad-4ff4-af73-0080adb7064d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"59a298bd-9b6d-4dca-965c-9b41a9f002e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"2bdeec61-712c-4918-adff-81a8c0eabe84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3551e7f4-a24d-401c-bad3-3fbf6ece152e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-581000\" primary control-plane node in \"download-only-581000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d8d2eebf-ae9b-43cc-afbe-392be7f57dc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fc830339-6303-459d-ab75-e8a1bc6e547a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108d9c020 0x108d9c020 0x108d9c020 0x108d9c020 0x108d9c020 0x108d9c020 0x108d9c020] Decompressors:map[bz2:0x140007fb700 gz:0x140007fb708 tar:0x140007fb6b0 tar.bz2:0x140007fb6c0 tar.gz:0x140007fb6d0 tar.xz:0x140007fb6e0 tar.zst:0x140007fb6f0 tbz2:0x140007fb6c0 tgz:0x14
0007fb6d0 txz:0x140007fb6e0 tzst:0x140007fb6f0 xz:0x140007fb710 zip:0x140007fb720 zst:0x140007fb718] Getters:map[file:0x14001404650 http:0x140004f8230 https:0x140004f8280] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"197f589e-5d87-40e1-8d98-b09a51cb0c2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0910 10:28:20.652600    1797 out.go:345] Setting OutFile to fd 1 ...
	I0910 10:28:20.652745    1797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 10:28:20.652748    1797 out.go:358] Setting ErrFile to fd 2...
	I0910 10:28:20.652750    1797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 10:28:20.652880    1797 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	W0910 10:28:20.652971    1797 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19598-1276/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19598-1276/.minikube/config/config.json: no such file or directory
	I0910 10:28:20.654255    1797 out.go:352] Setting JSON to true
	I0910 10:28:20.671245    1797 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1664,"bootTime":1725987636,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 10:28:20.671321    1797 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 10:28:20.677438    1797 out.go:97] [download-only-581000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 10:28:20.677569    1797 notify.go:220] Checking for updates...
	W0910 10:28:20.677578    1797 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball: no such file or directory
	I0910 10:28:20.681277    1797 out.go:169] MINIKUBE_LOCATION=19598
	I0910 10:28:20.684303    1797 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 10:28:20.689368    1797 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 10:28:20.692302    1797 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 10:28:20.695325    1797 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	W0910 10:28:20.701323    1797 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0910 10:28:20.701516    1797 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 10:28:20.706354    1797 out.go:97] Using the qemu2 driver based on user configuration
	I0910 10:28:20.706377    1797 start.go:297] selected driver: qemu2
	I0910 10:28:20.706381    1797 start.go:901] validating driver "qemu2" against <nil>
	I0910 10:28:20.706461    1797 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 10:28:20.709376    1797 out.go:169] Automatically selected the socket_vmnet network
	I0910 10:28:20.715028    1797 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0910 10:28:20.715118    1797 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0910 10:28:20.715210    1797 cni.go:84] Creating CNI manager for ""
	I0910 10:28:20.715226    1797 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0910 10:28:20.715273    1797 start.go:340] cluster config:
	{Name:download-only-581000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 10:28:20.720452    1797 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 10:28:20.725309    1797 out.go:97] Downloading VM boot image ...
	I0910 10:28:20.725322    1797 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso
	I0910 10:28:28.936338    1797 out.go:97] Starting "download-only-581000" primary control-plane node in "download-only-581000" cluster
	I0910 10:28:28.936358    1797 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0910 10:28:28.996348    1797 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0910 10:28:28.996356    1797 cache.go:56] Caching tarball of preloaded images
	I0910 10:28:28.996507    1797 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0910 10:28:29.001634    1797 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0910 10:28:29.001641    1797 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0910 10:28:29.077377    1797 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0910 10:28:34.604768    1797 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0910 10:28:34.604933    1797 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0910 10:28:35.300660    1797 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0910 10:28:35.300865    1797 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/download-only-581000/config.json ...
	I0910 10:28:35.300882    1797 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/download-only-581000/config.json: {Name:mk0d9555d9ba472361af6b5a19e01c658b692478 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 10:28:35.301105    1797 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0910 10:28:35.301311    1797 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0910 10:28:35.885001    1797 out.go:193] 
	W0910 10:28:35.890735    1797 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108d9c020 0x108d9c020 0x108d9c020 0x108d9c020 0x108d9c020 0x108d9c020 0x108d9c020] Decompressors:map[bz2:0x140007fb700 gz:0x140007fb708 tar:0x140007fb6b0 tar.bz2:0x140007fb6c0 tar.gz:0x140007fb6d0 tar.xz:0x140007fb6e0 tar.zst:0x140007fb6f0 tbz2:0x140007fb6c0 tgz:0x140007fb6d0 txz:0x140007fb6e0 tzst:0x140007fb6f0 xz:0x140007fb710 zip:0x140007fb720 zst:0x140007fb718] Getters:map[file:0x14001404650 http:0x140004f8230 https:0x140004f8280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0910 10:28:35.890768    1797 out_reason.go:110] 
	W0910 10:28:35.901860    1797 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 10:28:35.906807    1797 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-581000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (15.31s)
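
The exit status 40 comes from the kubectl caching step: the checksum fetch for the darwin/arm64 kubectl binary returns 404. That is consistent with upstream Kubernetes not publishing darwin/arm64 client binaries for the v1.20.x line (an assumption; Apple Silicon client builds arrived in later releases). The URL from the error can be probed directly:

	# confirm the 404 using the exact checksum URL from the log above
	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1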

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (10.09s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-249000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-249000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.932551958s)

-- stdout --
	* [offline-docker-249000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-249000" primary control-plane node in "offline-docker-249000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-249000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 11:06:18.048063    4907 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:06:18.048212    4907 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:06:18.048215    4907 out.go:358] Setting ErrFile to fd 2...
	I0910 11:06:18.048225    4907 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:06:18.048369    4907 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:06:18.049488    4907 out.go:352] Setting JSON to false
	I0910 11:06:18.067006    4907 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3942,"bootTime":1725987636,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:06:18.067089    4907 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:06:18.071368    4907 out.go:177] * [offline-docker-249000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:06:18.079314    4907 notify.go:220] Checking for updates...
	I0910 11:06:18.084304    4907 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:06:18.087326    4907 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:06:18.090241    4907 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:06:18.094277    4907 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:06:18.097198    4907 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:06:18.100262    4907 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:06:18.103597    4907 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:06:18.103658    4907 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:06:18.107207    4907 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 11:06:18.114233    4907 start.go:297] selected driver: qemu2
	I0910 11:06:18.114244    4907 start.go:901] validating driver "qemu2" against <nil>
	I0910 11:06:18.114252    4907 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:06:18.116374    4907 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 11:06:18.119230    4907 out.go:177] * Automatically selected the socket_vmnet network
	I0910 11:06:18.122312    4907 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 11:06:18.122329    4907 cni.go:84] Creating CNI manager for ""
	I0910 11:06:18.122335    4907 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:06:18.122339    4907 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 11:06:18.122368    4907 start.go:340] cluster config:
	{Name:offline-docker-249000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-249000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:06:18.126225    4907 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:06:18.133241    4907 out.go:177] * Starting "offline-docker-249000" primary control-plane node in "offline-docker-249000" cluster
	I0910 11:06:18.137227    4907 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:06:18.137264    4907 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:06:18.137275    4907 cache.go:56] Caching tarball of preloaded images
	I0910 11:06:18.137359    4907 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:06:18.137365    4907 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:06:18.137433    4907 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/offline-docker-249000/config.json ...
	I0910 11:06:18.137443    4907 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/offline-docker-249000/config.json: {Name:mk8ae5f475f0694d4ad10c60cc496729c85a3e10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:06:18.137679    4907 start.go:360] acquireMachinesLock for offline-docker-249000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:06:18.137710    4907 start.go:364] duration metric: took 24.708µs to acquireMachinesLock for "offline-docker-249000"
	I0910 11:06:18.137721    4907 start.go:93] Provisioning new machine with config: &{Name:offline-docker-249000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-249000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:06:18.137769    4907 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:06:18.146285    4907 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0910 11:06:18.162448    4907 start.go:159] libmachine.API.Create for "offline-docker-249000" (driver="qemu2")
	I0910 11:06:18.162473    4907 client.go:168] LocalClient.Create starting
	I0910 11:06:18.162557    4907 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:06:18.162587    4907 main.go:141] libmachine: Decoding PEM data...
	I0910 11:06:18.162594    4907 main.go:141] libmachine: Parsing certificate...
	I0910 11:06:18.162635    4907 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:06:18.162661    4907 main.go:141] libmachine: Decoding PEM data...
	I0910 11:06:18.162674    4907 main.go:141] libmachine: Parsing certificate...
	I0910 11:06:18.163057    4907 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:06:18.322082    4907 main.go:141] libmachine: Creating SSH key...
	I0910 11:06:18.502322    4907 main.go:141] libmachine: Creating Disk image...
	I0910 11:06:18.502333    4907 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:06:18.505909    4907 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/offline-docker-249000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/offline-docker-249000/disk.qcow2
	I0910 11:06:18.522885    4907 main.go:141] libmachine: STDOUT: 
	I0910 11:06:18.522914    4907 main.go:141] libmachine: STDERR: 
	I0910 11:06:18.522985    4907 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/offline-docker-249000/disk.qcow2 +20000M
	I0910 11:06:18.533960    4907 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:06:18.533991    4907 main.go:141] libmachine: STDERR: 
	I0910 11:06:18.534028    4907 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/offline-docker-249000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/offline-docker-249000/disk.qcow2
	I0910 11:06:18.534036    4907 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:06:18.534049    4907 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:06:18.534091    4907 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/offline-docker-249000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/offline-docker-249000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/offline-docker-249000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:4b:12:d0:9d:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/offline-docker-249000/disk.qcow2
	I0910 11:06:18.536228    4907 main.go:141] libmachine: STDOUT: 
	I0910 11:06:18.536249    4907 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:06:18.536268    4907 client.go:171] duration metric: took 373.800167ms to LocalClient.Create
	I0910 11:06:20.538426    4907 start.go:128] duration metric: took 2.400697166s to createHost
	I0910 11:06:20.538454    4907 start.go:83] releasing machines lock for "offline-docker-249000", held for 2.400803459s
	W0910 11:06:20.538481    4907 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:06:20.565832    4907 out.go:177] * Deleting "offline-docker-249000" in qemu2 ...
	W0910 11:06:20.586066    4907 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:06:20.586078    4907 start.go:729] Will try again in 5 seconds ...
	I0910 11:06:25.588022    4907 start.go:360] acquireMachinesLock for offline-docker-249000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:06:25.588140    4907 start.go:364] duration metric: took 93.666µs to acquireMachinesLock for "offline-docker-249000"
	I0910 11:06:25.588175    4907 start.go:93] Provisioning new machine with config: &{Name:offline-docker-249000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-249000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:06:25.588259    4907 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:06:25.601705    4907 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0910 11:06:25.617516    4907 start.go:159] libmachine.API.Create for "offline-docker-249000" (driver="qemu2")
	I0910 11:06:25.617542    4907 client.go:168] LocalClient.Create starting
	I0910 11:06:25.617603    4907 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:06:25.617637    4907 main.go:141] libmachine: Decoding PEM data...
	I0910 11:06:25.617646    4907 main.go:141] libmachine: Parsing certificate...
	I0910 11:06:25.617679    4907 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:06:25.617702    4907 main.go:141] libmachine: Decoding PEM data...
	I0910 11:06:25.617709    4907 main.go:141] libmachine: Parsing certificate...
	I0910 11:06:25.617978    4907 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:06:25.770887    4907 main.go:141] libmachine: Creating SSH key...
	I0910 11:06:25.888479    4907 main.go:141] libmachine: Creating Disk image...
	I0910 11:06:25.888486    4907 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:06:25.888699    4907 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/offline-docker-249000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/offline-docker-249000/disk.qcow2
	I0910 11:06:25.898059    4907 main.go:141] libmachine: STDOUT: 
	I0910 11:06:25.898091    4907 main.go:141] libmachine: STDERR: 
	I0910 11:06:25.898149    4907 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/offline-docker-249000/disk.qcow2 +20000M
	I0910 11:06:25.906216    4907 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:06:25.906233    4907 main.go:141] libmachine: STDERR: 
	I0910 11:06:25.906245    4907 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/offline-docker-249000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/offline-docker-249000/disk.qcow2
	I0910 11:06:25.906249    4907 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:06:25.906260    4907 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:06:25.906282    4907 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/offline-docker-249000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/offline-docker-249000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/offline-docker-249000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:7c:19:c3:7c:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/offline-docker-249000/disk.qcow2
	I0910 11:06:25.907844    4907 main.go:141] libmachine: STDOUT: 
	I0910 11:06:25.907860    4907 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:06:25.907872    4907 client.go:171] duration metric: took 290.335583ms to LocalClient.Create
	I0910 11:06:27.910047    4907 start.go:128] duration metric: took 2.32181025s to createHost
	I0910 11:06:27.910125    4907 start.go:83] releasing machines lock for "offline-docker-249000", held for 2.322035s
	W0910 11:06:27.910593    4907 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-249000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-249000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:06:27.919244    4907 out.go:201] 
	W0910 11:06:27.924320    4907 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:06:27.924353    4907 out.go:270] * 
	* 
	W0910 11:06:27.927234    4907 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:06:27.936132    4907 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-249000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-09-10 11:06:27.952345 -0700 PDT m=+2287.414444335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-249000 -n offline-docker-249000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-249000 -n offline-docker-249000: exit status 7 (67.697334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-249000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-249000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-249000
--- FAIL: TestOffline (10.09s)
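
Every occurrence of `Failed to connect to "/var/run/socket_vmnet": Connection refused` in this run points at the same host-level fault: the socket_vmnet daemon that the qemu2 driver relies on was not listening, so VM creation aborts within seconds. That one fault plausibly accounts for the many ~10s Start failures in the table above. A hedged recovery sketch for the CI host, assuming socket_vmnet was installed via Homebrew at the paths shown in the log:

	# check whether the daemon's socket exists at the path minikube uses
	ls -l /var/run/socket_vmnet
	# restart the launchd service; minikube's qemu2 driver docs run brew services under sudo
	sudo brew services restart socket_vmnet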

TestAddons/parallel/Registry (71.35s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.253625ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-qb2rh" [d1e5edb1-7803-4933-a00b-4e3f52088cd3] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011888625s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-g9hh4" [ba9d217c-a23d-45ff-985a-a5b541ecc35a] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011477333s
addons_test.go:342: (dbg) Run:  kubectl --context addons-592000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-592000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-592000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.053822791s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-592000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-592000 ip
2024/09/10 10:41:56 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-592000 addons disable registry --alsologtostderr -v=1
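
The assertion at addons_test.go:353 expects the registry service to answer the in-cluster probe with "HTTP/1.1 200"; here the busybox pod ran for the full minute with no response before being deleted. The probe can be repeated by hand, reusing the context, image, and service DNS name from the log above:

	# manual re-run of the same in-cluster probe the test performs
	kubectl --context addons-592000 run registry-probe --rm --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- wget --spider -S http://registry.kube-system.svc.cluster.local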
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-592000 -n addons-592000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-592000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-581000 | jenkins | v1.34.0 | 10 Sep 24 10:28 PDT |                     |
	|         | -p download-only-581000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 10 Sep 24 10:28 PDT | 10 Sep 24 10:28 PDT |
	| delete  | -p download-only-581000              | download-only-581000 | jenkins | v1.34.0 | 10 Sep 24 10:28 PDT | 10 Sep 24 10:28 PDT |
	| start   | -o=json --download-only              | download-only-266000 | jenkins | v1.34.0 | 10 Sep 24 10:28 PDT |                     |
	|         | -p download-only-266000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 10 Sep 24 10:28 PDT | 10 Sep 24 10:28 PDT |
	| delete  | -p download-only-266000              | download-only-266000 | jenkins | v1.34.0 | 10 Sep 24 10:28 PDT | 10 Sep 24 10:28 PDT |
	| delete  | -p download-only-581000              | download-only-581000 | jenkins | v1.34.0 | 10 Sep 24 10:28 PDT | 10 Sep 24 10:28 PDT |
	| delete  | -p download-only-266000              | download-only-266000 | jenkins | v1.34.0 | 10 Sep 24 10:28 PDT | 10 Sep 24 10:28 PDT |
	| start   | --download-only -p                   | binary-mirror-025000 | jenkins | v1.34.0 | 10 Sep 24 10:28 PDT |                     |
	|         | binary-mirror-025000                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49313               |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-025000              | binary-mirror-025000 | jenkins | v1.34.0 | 10 Sep 24 10:28 PDT | 10 Sep 24 10:28 PDT |
	| addons  | disable dashboard -p                 | addons-592000        | jenkins | v1.34.0 | 10 Sep 24 10:28 PDT |                     |
	|         | addons-592000                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-592000        | jenkins | v1.34.0 | 10 Sep 24 10:28 PDT |                     |
	|         | addons-592000                        |                      |         |         |                     |                     |
	| start   | -p addons-592000 --wait=true         | addons-592000        | jenkins | v1.34.0 | 10 Sep 24 10:28 PDT | 10 Sep 24 10:32 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	| addons  | addons-592000 addons disable         | addons-592000        | jenkins | v1.34.0 | 10 Sep 24 10:32 PDT | 10 Sep 24 10:32 PDT |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-592000 addons                 | addons-592000        | jenkins | v1.34.0 | 10 Sep 24 10:41 PDT | 10 Sep 24 10:41 PDT |
	|         | disable csi-hostpath-driver          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-592000 addons                 | addons-592000        | jenkins | v1.34.0 | 10 Sep 24 10:41 PDT | 10 Sep 24 10:41 PDT |
	|         | disable volumesnapshots              |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-592000 addons                 | addons-592000        | jenkins | v1.34.0 | 10 Sep 24 10:41 PDT | 10 Sep 24 10:41 PDT |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| ip      | addons-592000 ip                     | addons-592000        | jenkins | v1.34.0 | 10 Sep 24 10:41 PDT | 10 Sep 24 10:41 PDT |
	| addons  | addons-592000 addons disable         | addons-592000        | jenkins | v1.34.0 | 10 Sep 24 10:41 PDT | 10 Sep 24 10:41 PDT |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
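
The Audit table above is minikube's per-host command history; outside a failing test it can be regenerated directly. A sketch, assuming the same binary and profile (the --audit flag is present in recent minikube releases):

        # full audit history for this host
        out/minikube-darwin-arm64 logs --audit
        # last 25 lines of cluster logs, as the post-mortem helper runs above
        out/minikube-darwin-arm64 -p addons-592000 logs -n 25
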
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 10:28:45
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 10:28:45.693155    1875 out.go:345] Setting OutFile to fd 1 ...
	I0910 10:28:45.693283    1875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 10:28:45.693287    1875 out.go:358] Setting ErrFile to fd 2...
	I0910 10:28:45.693289    1875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 10:28:45.693421    1875 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 10:28:45.694452    1875 out.go:352] Setting JSON to false
	I0910 10:28:45.710674    1875 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1689,"bootTime":1725987636,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 10:28:45.710740    1875 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 10:28:45.716195    1875 out.go:177] * [addons-592000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 10:28:45.723155    1875 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 10:28:45.723217    1875 notify.go:220] Checking for updates...
	I0910 10:28:45.730169    1875 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 10:28:45.733187    1875 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 10:28:45.736130    1875 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 10:28:45.737611    1875 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 10:28:45.741132    1875 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 10:28:45.744378    1875 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 10:28:45.748011    1875 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 10:28:45.755137    1875 start.go:297] selected driver: qemu2
	I0910 10:28:45.755143    1875 start.go:901] validating driver "qemu2" against <nil>
	I0910 10:28:45.755151    1875 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 10:28:45.757477    1875 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 10:28:45.760196    1875 out.go:177] * Automatically selected the socket_vmnet network
	I0910 10:28:45.763176    1875 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 10:28:45.763213    1875 cni.go:84] Creating CNI manager for ""
	I0910 10:28:45.763221    1875 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 10:28:45.763228    1875 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 10:28:45.763256    1875 start.go:340] cluster config:
	{Name:addons-592000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-592000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 10:28:45.766946    1875 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 10:28:45.774154    1875 out.go:177] * Starting "addons-592000" primary control-plane node in "addons-592000" cluster
	I0910 10:28:45.778146    1875 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 10:28:45.778164    1875 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 10:28:45.778172    1875 cache.go:56] Caching tarball of preloaded images
	I0910 10:28:45.778247    1875 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 10:28:45.778258    1875 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 10:28:45.778488    1875 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/config.json ...
	I0910 10:28:45.778502    1875 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/config.json: {Name:mk87e5e9c33c7acb2edbaba9065788eced1fe537 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 10:28:45.778927    1875 start.go:360] acquireMachinesLock for addons-592000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 10:28:45.778993    1875 start.go:364] duration metric: took 60.334µs to acquireMachinesLock for "addons-592000"
	I0910 10:28:45.779006    1875 start.go:93] Provisioning new machine with config: &{Name:addons-592000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-592000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 10:28:45.779036    1875 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 10:28:45.782128    1875 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0910 10:28:46.581661    1875 start.go:159] libmachine.API.Create for "addons-592000" (driver="qemu2")
	I0910 10:28:46.581686    1875 client.go:168] LocalClient.Create starting
	I0910 10:28:46.581845    1875 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 10:28:46.707568    1875 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 10:28:46.788559    1875 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 10:28:47.080203    1875 main.go:141] libmachine: Creating SSH key...
	I0910 10:28:47.229169    1875 main.go:141] libmachine: Creating Disk image...
	I0910 10:28:47.229175    1875 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 10:28:47.229504    1875 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/disk.qcow2
	I0910 10:28:47.249001    1875 main.go:141] libmachine: STDOUT: 
	I0910 10:28:47.249028    1875 main.go:141] libmachine: STDERR: 
	I0910 10:28:47.249083    1875 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/disk.qcow2 +20000M
	I0910 10:28:47.257192    1875 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 10:28:47.257208    1875 main.go:141] libmachine: STDERR: 
	I0910 10:28:47.257224    1875 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/disk.qcow2
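
The disk image above is produced by two stock qemu-img calls: convert the raw seed to qcow2, then grow it to the requested size. The same pair can be replayed by hand; a sketch with placeholder paths:

        # convert the raw seed disk into qcow2 format
        qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
        # grow the virtual size by 20000 MB, as the driver does here
        qemu-img resize disk.qcow2 +20000M
        # confirm the resulting virtual size
        qemu-img info disk.qcow2
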
	I0910 10:28:47.257228    1875 main.go:141] libmachine: Starting QEMU VM...
	I0910 10:28:47.257269    1875 qemu.go:418] Using hvf for hardware acceleration
	I0910 10:28:47.257293    1875 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:45:b2:e1:c9:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/disk.qcow2
	I0910 10:28:47.315476    1875 main.go:141] libmachine: STDOUT: 
	I0910 10:28:47.315506    1875 main.go:141] libmachine: STDERR: 
	I0910 10:28:47.315510    1875 main.go:141] libmachine: Attempt 0
	I0910 10:28:47.315532    1875 main.go:141] libmachine: Searching for e:45:b2:e1:c9:a0 in /var/db/dhcpd_leases ...
	I0910 10:28:47.315600    1875 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0910 10:28:47.315617    1875 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e1d328}
	I0910 10:28:49.317748    1875 main.go:141] libmachine: Attempt 1
	I0910 10:28:49.317916    1875 main.go:141] libmachine: Searching for e:45:b2:e1:c9:a0 in /var/db/dhcpd_leases ...
	I0910 10:28:49.318270    1875 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0910 10:28:49.318321    1875 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e1d328}
	I0910 10:28:51.320492    1875 main.go:141] libmachine: Attempt 2
	I0910 10:28:51.320584    1875 main.go:141] libmachine: Searching for e:45:b2:e1:c9:a0 in /var/db/dhcpd_leases ...
	I0910 10:28:51.321010    1875 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0910 10:28:51.321062    1875 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e1d328}
	I0910 10:28:53.323214    1875 main.go:141] libmachine: Attempt 3
	I0910 10:28:53.323245    1875 main.go:141] libmachine: Searching for e:45:b2:e1:c9:a0 in /var/db/dhcpd_leases ...
	I0910 10:28:53.323347    1875 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0910 10:28:53.323396    1875 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e1d328}
	I0910 10:28:55.325378    1875 main.go:141] libmachine: Attempt 4
	I0910 10:28:55.325391    1875 main.go:141] libmachine: Searching for e:45:b2:e1:c9:a0 in /var/db/dhcpd_leases ...
	I0910 10:28:55.325432    1875 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0910 10:28:55.325448    1875 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e1d328}
	I0910 10:28:57.327419    1875 main.go:141] libmachine: Attempt 5
	I0910 10:28:57.327431    1875 main.go:141] libmachine: Searching for e:45:b2:e1:c9:a0 in /var/db/dhcpd_leases ...
	I0910 10:28:57.327535    1875 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0910 10:28:57.327561    1875 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e1d328}
	I0910 10:28:59.329559    1875 main.go:141] libmachine: Attempt 6
	I0910 10:28:59.329581    1875 main.go:141] libmachine: Searching for e:45:b2:e1:c9:a0 in /var/db/dhcpd_leases ...
	I0910 10:28:59.329661    1875 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0910 10:28:59.329670    1875 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e1d328}
	I0910 10:29:01.330677    1875 main.go:141] libmachine: Attempt 7
	I0910 10:29:01.330697    1875 main.go:141] libmachine: Searching for e:45:b2:e1:c9:a0 in /var/db/dhcpd_leases ...
	I0910 10:29:01.330826    1875 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0910 10:29:01.330838    1875 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:e:45:b2:e1:c9:a0 ID:1,e:45:b2:e1:c9:a0 Lease:0x66e1d35b}
	I0910 10:29:01.330840    1875 main.go:141] libmachine: Found match: e:45:b2:e1:c9:a0
	I0910 10:29:01.330849    1875 main.go:141] libmachine: IP: 192.168.105.2
	I0910 10:29:01.330854    1875 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
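
The attempt loop above polls the macOS vmnet DHCP lease database until the VM's MAC address shows up. The same lookup works directly on the host; a sketch using the MAC from this run (note the driver strips the leading zero of 0e:45:b2:e1:c9:a0 to match the lease file's format):

        # lease entries handed out by the host's vmnet DHCP server
        grep -i -B1 -A3 'e:45:b2:e1:c9:a0' /var/db/dhcpd_leases
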
	I0910 10:29:03.352045    1875 machine.go:93] provisionDockerMachine start ...
	I0910 10:29:03.353615    1875 main.go:141] libmachine: Using SSH client type: native
	I0910 10:29:03.354144    1875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100547ba0] 0x10054a400 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0910 10:29:03.354161    1875 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 10:29:03.418174    1875 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 10:29:03.418202    1875 buildroot.go:166] provisioning hostname "addons-592000"
	I0910 10:29:03.418352    1875 main.go:141] libmachine: Using SSH client type: native
	I0910 10:29:03.418592    1875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100547ba0] 0x10054a400 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0910 10:29:03.418601    1875 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-592000 && echo "addons-592000" | sudo tee /etc/hostname
	I0910 10:29:03.475785    1875 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-592000
	
	I0910 10:29:03.475841    1875 main.go:141] libmachine: Using SSH client type: native
	I0910 10:29:03.475984    1875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100547ba0] 0x10054a400 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0910 10:29:03.475994    1875 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-592000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-592000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-592000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 10:29:03.523940    1875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 10:29:03.523952    1875 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19598-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19598-1276/.minikube}
	I0910 10:29:03.523964    1875 buildroot.go:174] setting up certificates
	I0910 10:29:03.523972    1875 provision.go:84] configureAuth start
	I0910 10:29:03.523975    1875 provision.go:143] copyHostCerts
	I0910 10:29:03.524070    1875 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.pem (1078 bytes)
	I0910 10:29:03.524310    1875 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19598-1276/.minikube/cert.pem (1123 bytes)
	I0910 10:29:03.524442    1875 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19598-1276/.minikube/key.pem (1675 bytes)
	I0910 10:29:03.524536    1875 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca-key.pem org=jenkins.addons-592000 san=[127.0.0.1 192.168.105.2 addons-592000 localhost minikube]
	I0910 10:29:03.637245    1875 provision.go:177] copyRemoteCerts
	I0910 10:29:03.637300    1875 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 10:29:03.637318    1875 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/id_rsa Username:docker}
	I0910 10:29:03.660294    1875 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 10:29:03.668596    1875 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0910 10:29:03.676815    1875 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 10:29:03.685077    1875 provision.go:87] duration metric: took 161.105792ms to configureAuth
	I0910 10:29:03.685086    1875 buildroot.go:189] setting minikube options for container-runtime
	I0910 10:29:03.685184    1875 config.go:182] Loaded profile config "addons-592000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 10:29:03.685223    1875 main.go:141] libmachine: Using SSH client type: native
	I0910 10:29:03.685310    1875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100547ba0] 0x10054a400 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0910 10:29:03.685315    1875 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 10:29:03.728467    1875 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 10:29:03.728475    1875 buildroot.go:70] root file system type: tmpfs
	I0910 10:29:03.728532    1875 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 10:29:03.728579    1875 main.go:141] libmachine: Using SSH client type: native
	I0910 10:29:03.728688    1875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100547ba0] 0x10054a400 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0910 10:29:03.728722    1875 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 10:29:03.777660    1875 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 10:29:03.777704    1875 main.go:141] libmachine: Using SSH client type: native
	I0910 10:29:03.777808    1875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100547ba0] 0x10054a400 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0910 10:29:03.777816    1875 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 10:29:05.144782    1875 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0910 10:29:05.144796    1875 machine.go:96] duration metric: took 1.792778542s to provisionDockerMachine
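
With provisioning done, the docker.service unit written above can be inspected from inside the guest, along with the daemon's state. A sketch, assuming the profile is still running:

        # show the unit file actually installed in the VM
        out/minikube-darwin-arm64 -p addons-592000 ssh -- sudo systemctl cat docker
        # confirm the daemon restarted cleanly
        out/minikube-darwin-arm64 -p addons-592000 ssh -- sudo systemctl is-active docker
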
	I0910 10:29:05.144802    1875 client.go:171] duration metric: took 18.563708542s to LocalClient.Create
	I0910 10:29:05.144814    1875 start.go:167] duration metric: took 18.56375375s to libmachine.API.Create "addons-592000"
	I0910 10:29:05.144818    1875 start.go:293] postStartSetup for "addons-592000" (driver="qemu2")
	I0910 10:29:05.144824    1875 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 10:29:05.144891    1875 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 10:29:05.144901    1875 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/id_rsa Username:docker}
	I0910 10:29:05.168464    1875 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 10:29:05.170060    1875 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 10:29:05.170069    1875 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19598-1276/.minikube/addons for local assets ...
	I0910 10:29:05.170158    1875 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19598-1276/.minikube/files for local assets ...
	I0910 10:29:05.170189    1875 start.go:296] duration metric: took 25.369208ms for postStartSetup
	I0910 10:29:05.170565    1875 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/config.json ...
	I0910 10:29:05.170749    1875 start.go:128] duration metric: took 19.3923325s to createHost
	I0910 10:29:05.170775    1875 main.go:141] libmachine: Using SSH client type: native
	I0910 10:29:05.170916    1875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100547ba0] 0x10054a400 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0910 10:29:05.170920    1875 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 10:29:05.215407    1875 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725989344.976501419
	
	I0910 10:29:05.215418    1875 fix.go:216] guest clock: 1725989344.976501419
	I0910 10:29:05.215422    1875 fix.go:229] Guest: 2024-09-10 10:29:04.976501419 -0700 PDT Remote: 2024-09-10 10:29:05.170752 -0700 PDT m=+19.497155668 (delta=-194.250581ms)
	I0910 10:29:05.215433    1875 fix.go:200] guest clock delta is within tolerance: -194.250581ms
	I0910 10:29:05.215436    1875 start.go:83] releasing machines lock for "addons-592000", held for 19.437062166s
	I0910 10:29:05.215764    1875 ssh_runner.go:195] Run: cat /version.json
	I0910 10:29:05.215774    1875 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 10:29:05.215773    1875 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/id_rsa Username:docker}
	I0910 10:29:05.215799    1875 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/id_rsa Username:docker}
	I0910 10:29:05.283675    1875 ssh_runner.go:195] Run: systemctl --version
	I0910 10:29:05.286062    1875 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 10:29:05.288100    1875 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 10:29:05.288124    1875 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 10:29:05.294295    1875 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 10:29:05.294303    1875 start.go:495] detecting cgroup driver to use...
	I0910 10:29:05.294420    1875 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 10:29:05.300899    1875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0910 10:29:05.304286    1875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 10:29:05.308025    1875 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 10:29:05.308049    1875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 10:29:05.311727    1875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 10:29:05.315637    1875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 10:29:05.319435    1875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 10:29:05.323299    1875 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 10:29:05.327310    1875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 10:29:05.331272    1875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 10:29:05.335328    1875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0910 10:29:05.339376    1875 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 10:29:05.343093    1875 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 10:29:05.346942    1875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 10:29:05.430139    1875 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0910 10:29:05.437086    1875 start.go:495] detecting cgroup driver to use...
	I0910 10:29:05.437158    1875 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 10:29:05.445022    1875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 10:29:05.452202    1875 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 10:29:05.458971    1875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 10:29:05.464729    1875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 10:29:05.469729    1875 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0910 10:29:05.513701    1875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 10:29:05.519826    1875 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 10:29:05.526422    1875 ssh_runner.go:195] Run: which cri-dockerd
	I0910 10:29:05.527805    1875 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 10:29:05.531274    1875 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0910 10:29:05.537184    1875 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 10:29:05.617146    1875 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 10:29:05.697824    1875 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 10:29:05.697887    1875 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0910 10:29:05.703892    1875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 10:29:05.784342    1875 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 10:29:07.968861    1875 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.1845685s)
	I0910 10:29:07.968938    1875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0910 10:29:07.974357    1875 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0910 10:29:07.981366    1875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 10:29:07.986804    1875 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 10:29:08.070183    1875 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 10:29:08.167023    1875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 10:29:08.249736    1875 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 10:29:08.256552    1875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 10:29:08.261771    1875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 10:29:08.346509    1875 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0910 10:29:08.372514    1875 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 10:29:08.372631    1875 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 10:29:08.374874    1875 start.go:563] Will wait 60s for crictl version
	I0910 10:29:08.374917    1875 ssh_runner.go:195] Run: which crictl
	I0910 10:29:08.376398    1875 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 10:29:08.394745    1875 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0910 10:29:08.394816    1875 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 10:29:08.406266    1875 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 10:29:08.423595    1875 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0910 10:29:08.423739    1875 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0910 10:29:08.425520    1875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 10:29:08.429979    1875 kubeadm.go:883] updating cluster {Name:addons-592000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-592000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 10:29:08.430030    1875 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 10:29:08.430073    1875 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 10:29:08.435068    1875 docker.go:685] Got preloaded images: 
	I0910 10:29:08.435078    1875 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0910 10:29:08.435114    1875 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0910 10:29:08.438873    1875 ssh_runner.go:195] Run: which lz4
	I0910 10:29:08.440318    1875 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 10:29:08.441769    1875 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 10:29:08.441781    1875 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (322549298 bytes)
	I0910 10:29:09.694769    1875 docker.go:649] duration metric: took 1.254526542s to copy over tarball
	I0910 10:29:09.694840    1875 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 10:29:10.678670    1875 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 10:29:10.693909    1875 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0910 10:29:10.697900    1875 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0910 10:29:10.703646    1875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 10:29:10.790245    1875 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 10:29:13.014626    1875 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.224432833s)
	I0910 10:29:13.014735    1875 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 10:29:13.021237    1875 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0910 10:29:13.021251    1875 cache_images.go:84] Images are preloaded, skipping loading
	I0910 10:29:13.021256    1875 kubeadm.go:934] updating node { 192.168.105.2 8443 v1.31.0 docker true true} ...
	I0910 10:29:13.021321    1875 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-592000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-592000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 10:29:13.021383    1875 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0910 10:29:13.042870    1875 cni.go:84] Creating CNI manager for ""
	I0910 10:29:13.042884    1875 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 10:29:13.042888    1875 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 10:29:13.042898    1875 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-592000 NodeName:addons-592000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 10:29:13.042956    1875 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-592000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
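
The generated kubeadm config above can be sanity-checked before it is ever applied; kubeadm releases of this vintage ship a validator subcommand. A sketch, assuming the three documents have been saved locally as kubeadm.yaml (in the VM the file lands at /var/tmp/minikube/kubeadm.yaml.new, per the scp below):

        # offline validation of the generated config (flag availability depends on the kubeadm build)
        kubeadm config validate --config kubeadm.yaml
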
	
	I0910 10:29:13.043006    1875 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 10:29:13.047075    1875 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 10:29:13.047102    1875 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 10:29:13.050802    1875 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0910 10:29:13.056441    1875 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 10:29:13.062303    1875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0910 10:29:13.068551    1875 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0910 10:29:13.069957    1875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 10:29:13.074192    1875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 10:29:13.154629    1875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 10:29:13.161446    1875 certs.go:68] Setting up /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000 for IP: 192.168.105.2
	I0910 10:29:13.161465    1875 certs.go:194] generating shared ca certs ...
	I0910 10:29:13.161476    1875 certs.go:226] acquiring lock for ca certs: {Name:mk5b237e8da18ff05d2622f0be5a14dbe0d9b4f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 10:29:13.161649    1875 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.key
	I0910 10:29:13.263897    1875 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.crt ...
	I0910 10:29:13.263907    1875 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.crt: {Name:mkd39bb2339b56e696cde1c2228697a7fee6c743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 10:29:13.264213    1875 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.key ...
	I0910 10:29:13.264218    1875 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.key: {Name:mk219de00d005948feef2cf1f0925d643ba417cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 10:29:13.264347    1875 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/proxy-client-ca.key
	I0910 10:29:13.455808    1875 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19598-1276/.minikube/proxy-client-ca.crt ...
	I0910 10:29:13.455817    1875 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/proxy-client-ca.crt: {Name:mkf2cce791861831a4bc24fcb04522b1b5874c5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 10:29:13.455997    1875 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19598-1276/.minikube/proxy-client-ca.key ...
	I0910 10:29:13.456001    1875 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/proxy-client-ca.key: {Name:mk1f9ba151bf80535c9ea2645d742d7b3e4744f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 10:29:13.456165    1875 certs.go:256] generating profile certs ...
	I0910 10:29:13.456214    1875 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.key
	I0910 10:29:13.456221    1875 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt with IPs: []
	I0910 10:29:13.536593    1875 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt ...
	I0910 10:29:13.536602    1875 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: {Name:mkfa8e28994e8092488ddd9780811f518fc5e161 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 10:29:13.536839    1875 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.key ...
	I0910 10:29:13.536846    1875 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.key: {Name:mke1771ae08dcf943b5f9c9216b7f1ac96c469f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 10:29:13.536977    1875 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/apiserver.key.eb53f3ee
	I0910 10:29:13.536990    1875 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/apiserver.crt.eb53f3ee with IPs: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0910 10:29:13.693495    1875 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/apiserver.crt.eb53f3ee ...
	I0910 10:29:13.693500    1875 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/apiserver.crt.eb53f3ee: {Name:mkf3c725a80df3045b8a156a0852d1b94b4b48d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 10:29:13.693658    1875 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/apiserver.key.eb53f3ee ...
	I0910 10:29:13.693662    1875 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/apiserver.key.eb53f3ee: {Name:mke9373159797bf03c0e864c179b0fd670adc4b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 10:29:13.693788    1875 certs.go:381] copying /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/apiserver.crt.eb53f3ee -> /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/apiserver.crt
	I0910 10:29:13.693894    1875 certs.go:385] copying /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/apiserver.key.eb53f3ee -> /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/apiserver.key
	I0910 10:29:13.693988    1875 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/proxy-client.key
	I0910 10:29:13.693999    1875 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/proxy-client.crt with IPs: []
	I0910 10:29:13.759179    1875 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/proxy-client.crt ...
	I0910 10:29:13.759183    1875 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/proxy-client.crt: {Name:mkaace6fa55aba2bf4801f67f833a141bb8bf43d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 10:29:13.759326    1875 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/proxy-client.key ...
	I0910 10:29:13.759329    1875 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/proxy-client.key: {Name:mk283f60dd1edc7fbbd58d0e42c4323997cc84c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 10:29:13.759604    1875 certs.go:484] found cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca-key.pem (1675 bytes)
	I0910 10:29:13.759629    1875 certs.go:484] found cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem (1078 bytes)
	I0910 10:29:13.759650    1875 certs.go:484] found cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem (1123 bytes)
	I0910 10:29:13.759671    1875 certs.go:484] found cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/key.pem (1675 bytes)
	I0910 10:29:13.760196    1875 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 10:29:13.769313    1875 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 10:29:13.777591    1875 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 10:29:13.785744    1875 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 10:29:13.794593    1875 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0910 10:29:13.803445    1875 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 10:29:13.812115    1875 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 10:29:13.825226    1875 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 10:29:13.833983    1875 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 10:29:13.842298    1875 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 10:29:13.849866    1875 ssh_runner.go:195] Run: openssl version
	I0910 10:29:13.852141    1875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 10:29:13.855894    1875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 10:29:13.857523    1875 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 10:29:13.857545    1875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 10:29:13.859597    1875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
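The commands above install the minikube CA into the node's OpenSSL trust store: the subject hash printed by openssl x509 -hash -noout names the <hash>.0 symlink that OpenSSL's certificate lookup expects (here b5213941.0), the same convention c_rehash automates. Reproduced as two standalone commands (a sketch using the paths from this run):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0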
	I0910 10:29:13.863492    1875 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 10:29:13.865038    1875 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
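The failed stat is expected: certs.go treats a missing apiserver-kubelet-client.crt as the signal that this is a first start, so kubeadm is left to generate the remaining certificates itself. The same check as a shell conditional (illustrative only):

	if ! sudo stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1; then
	  echo "first start: let kubeadm generate the kubelet client cert"
	fi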
	I0910 10:29:13.865084    1875 kubeadm.go:392] StartCluster: {Name:addons-592000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-592000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 10:29:13.865150    1875 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 10:29:13.871023    1875 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 10:29:13.875058    1875 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 10:29:13.878530    1875 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 10:29:13.882000    1875 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 10:29:13.882005    1875 kubeadm.go:157] found existing configuration files:
	
	I0910 10:29:13.882028    1875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 10:29:13.885437    1875 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 10:29:13.885464    1875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 10:29:13.888624    1875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 10:29:13.891713    1875 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 10:29:13.891736    1875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 10:29:13.895212    1875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 10:29:13.898735    1875 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 10:29:13.898760    1875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 10:29:13.902333    1875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 10:29:13.905748    1875 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 10:29:13.905774    1875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 10:29:13.908867    1875 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
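The long --ignore-preflight-errors list above lets kubeadm init proceed even though minikube has pre-populated /etc/kubernetes/manifests and /var/lib/minikube, and even on small VMs that would otherwise trip the Swap, NumCPU, or Mem checks. To see which checks would fire without suppressing them, the preflight phase can be run on its own (a sketch, reusing the binary path and config file from this run):

	sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
	  kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml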
	I0910 10:29:13.930521    1875 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0910 10:29:13.930552    1875 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 10:29:13.967016    1875 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 10:29:13.967069    1875 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 10:29:13.967114    1875 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 10:29:13.971365    1875 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 10:29:13.991396    1875 out.go:235]   - Generating certificates and keys ...
	I0910 10:29:13.991433    1875 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 10:29:13.991480    1875 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 10:29:14.120764    1875 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0910 10:29:14.159414    1875 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0910 10:29:14.229495    1875 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0910 10:29:14.434717    1875 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0910 10:29:14.473023    1875 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0910 10:29:14.473087    1875 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-592000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0910 10:29:14.612695    1875 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0910 10:29:14.612768    1875 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-592000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0910 10:29:14.708108    1875 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0910 10:29:14.851201    1875 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0910 10:29:14.943176    1875 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0910 10:29:14.943209    1875 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 10:29:15.119708    1875 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 10:29:15.228258    1875 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 10:29:15.496556    1875 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 10:29:15.540057    1875 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 10:29:15.577134    1875 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 10:29:15.577374    1875 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 10:29:15.578543    1875 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 10:29:15.582761    1875 out.go:235]   - Booting up control plane ...
	I0910 10:29:15.582809    1875 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 10:29:15.582846    1875 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 10:29:15.582887    1875 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 10:29:15.586154    1875 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 10:29:15.588794    1875 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 10:29:15.588834    1875 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 10:29:15.672333    1875 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0910 10:29:15.672414    1875 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 10:29:16.177434    1875 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.576875ms
	I0910 10:29:16.177640    1875 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0910 10:29:19.177902    1875 kubeadm.go:310] [api-check] The API server is healthy after 3.001294835s
	I0910 10:29:19.185308    1875 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 10:29:19.192126    1875 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 10:29:19.199884    1875 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 10:29:19.199979    1875 kubeadm.go:310] [mark-control-plane] Marking the node addons-592000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 10:29:19.202726    1875 kubeadm.go:310] [bootstrap-token] Using token: xr4i0x.3t39gk90u27t7j6s
	I0910 10:29:19.209329    1875 out.go:235]   - Configuring RBAC rules ...
	I0910 10:29:19.209393    1875 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 10:29:19.210326    1875 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 10:29:19.217142    1875 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 10:29:19.218049    1875 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0910 10:29:19.219102    1875 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 10:29:19.220349    1875 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 10:29:19.590548    1875 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 10:29:20.006707    1875 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0910 10:29:20.585372    1875 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0910 10:29:20.586248    1875 kubeadm.go:310] 
	I0910 10:29:20.586375    1875 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0910 10:29:20.586391    1875 kubeadm.go:310] 
	I0910 10:29:20.586512    1875 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0910 10:29:20.586524    1875 kubeadm.go:310] 
	I0910 10:29:20.586554    1875 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0910 10:29:20.586653    1875 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 10:29:20.586723    1875 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 10:29:20.586736    1875 kubeadm.go:310] 
	I0910 10:29:20.586812    1875 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0910 10:29:20.586819    1875 kubeadm.go:310] 
	I0910 10:29:20.586882    1875 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 10:29:20.586891    1875 kubeadm.go:310] 
	I0910 10:29:20.586959    1875 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0910 10:29:20.587057    1875 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 10:29:20.587158    1875 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 10:29:20.587172    1875 kubeadm.go:310] 
	I0910 10:29:20.587292    1875 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 10:29:20.587403    1875 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0910 10:29:20.587411    1875 kubeadm.go:310] 
	I0910 10:29:20.587553    1875 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xr4i0x.3t39gk90u27t7j6s \
	I0910 10:29:20.587669    1875 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fe03b769f4337d7c0adc05ef52c00fad5eef028fab37b5c6cf35794f6ca4bdd0 \
	I0910 10:29:20.587706    1875 kubeadm.go:310] 	--control-plane 
	I0910 10:29:20.587713    1875 kubeadm.go:310] 
	I0910 10:29:20.587832    1875 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0910 10:29:20.587842    1875 kubeadm.go:310] 
	I0910 10:29:20.588010    1875 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xr4i0x.3t39gk90u27t7j6s \
	I0910 10:29:20.588142    1875 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fe03b769f4337d7c0adc05ef52c00fad5eef028fab37b5c6cf35794f6ca4bdd0 
	I0910 10:29:20.588564    1875 kubeadm.go:310] W0910 17:29:13.690733    1578 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 10:29:20.588980    1875 kubeadm.go:310] W0910 17:29:13.691061    1578 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 10:29:20.589118    1875 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
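The two deprecation warnings carry their own remediation: kubeadm can rewrite the v1beta3 document to the current API version. Spelled out against the config file from this run (the output filename is illustrative):

	kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config kubeadm-new.yaml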
	I0910 10:29:20.589134    1875 cni.go:84] Creating CNI manager for ""
	I0910 10:29:20.589149    1875 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 10:29:20.593465    1875 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 10:29:20.595224    1875 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 10:29:20.603394    1875 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
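The 496-byte conflist copied above is minikube's generated bridge CNI config. Its exact contents are not echoed in the log, but a bridge conflist of this shape (an assumption based on the clusterCIDR 10.244.0.0/16 and hairpin-veth settings earlier in the generated config, not the verbatim file) looks like:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}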
	I0910 10:29:20.615194    1875 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 10:29:20.615317    1875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 10:29:20.615341    1875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-592000 minikube.k8s.io/updated_at=2024_09_10T10_29_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=addons-592000 minikube.k8s.io/primary=true
	I0910 10:29:20.629136    1875 ops.go:34] apiserver oom_adj: -16
	I0910 10:29:20.692007    1875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 10:29:21.194188    1875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 10:29:21.694294    1875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 10:29:22.194152    1875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 10:29:22.694109    1875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 10:29:23.194082    1875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 10:29:23.693404    1875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 10:29:24.194060    1875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 10:29:24.694007    1875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 10:29:25.194010    1875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 10:29:25.242015    1875 kubeadm.go:1113] duration metric: took 4.626950333s to wait for elevateKubeSystemPrivileges
	I0910 10:29:25.242031    1875 kubeadm.go:394] duration metric: took 11.377317791s to StartCluster
	I0910 10:29:25.242042    1875 settings.go:142] acquiring lock: {Name:mkc4479acb7c6185024679cd35acf0055f682c3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 10:29:25.242217    1875 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 10:29:25.242409    1875 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/kubeconfig: {Name:mk1f6cc8b92900503b90f69186dd5a0cadd3a95f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 10:29:25.242619    1875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0910 10:29:25.242652    1875 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 10:29:25.242693    1875 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0910 10:29:25.242768    1875 addons.go:69] Setting yakd=true in profile "addons-592000"
	I0910 10:29:25.242775    1875 addons.go:69] Setting inspektor-gadget=true in profile "addons-592000"
	I0910 10:29:25.242785    1875 addons.go:69] Setting metrics-server=true in profile "addons-592000"
	I0910 10:29:25.242797    1875 addons.go:234] Setting addon inspektor-gadget=true in "addons-592000"
	I0910 10:29:25.242803    1875 addons.go:234] Setting addon metrics-server=true in "addons-592000"
	I0910 10:29:25.242808    1875 config.go:182] Loaded profile config "addons-592000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 10:29:25.242813    1875 host.go:66] Checking if "addons-592000" exists ...
	I0910 10:29:25.242814    1875 host.go:66] Checking if "addons-592000" exists ...
	I0910 10:29:25.242834    1875 addons.go:69] Setting cloud-spanner=true in profile "addons-592000"
	I0910 10:29:25.242830    1875 addons.go:69] Setting default-storageclass=true in profile "addons-592000"
	I0910 10:29:25.242839    1875 addons.go:69] Setting ingress-dns=true in profile "addons-592000"
	I0910 10:29:25.242861    1875 addons.go:69] Setting registry=true in profile "addons-592000"
	I0910 10:29:25.242868    1875 addons.go:234] Setting addon registry=true in "addons-592000"
	I0910 10:29:25.242877    1875 host.go:66] Checking if "addons-592000" exists ...
	I0910 10:29:25.242824    1875 addons.go:69] Setting ingress=true in profile "addons-592000"
	I0910 10:29:25.242846    1875 addons.go:234] Setting addon cloud-spanner=true in "addons-592000"
	I0910 10:29:25.242908    1875 addons.go:234] Setting addon ingress=true in "addons-592000"
	I0910 10:29:25.242917    1875 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-592000"
	I0910 10:29:25.242931    1875 host.go:66] Checking if "addons-592000" exists ...
	I0910 10:29:25.242947    1875 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-592000"
	I0910 10:29:25.242883    1875 addons.go:234] Setting addon ingress-dns=true in "addons-592000"
	I0910 10:29:25.242987    1875 host.go:66] Checking if "addons-592000" exists ...
	I0910 10:29:25.242782    1875 addons.go:234] Setting addon yakd=true in "addons-592000"
	I0910 10:29:25.243120    1875 host.go:66] Checking if "addons-592000" exists ...
	I0910 10:29:25.243122    1875 retry.go:31] will retry after 1.407518328s: connect: dial unix /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/monitor: connect: connection refused
	I0910 10:29:25.242895    1875 addons.go:69] Setting volcano=true in profile "addons-592000"
	I0910 10:29:25.243200    1875 addons.go:234] Setting addon volcano=true in "addons-592000"
	I0910 10:29:25.243211    1875 host.go:66] Checking if "addons-592000" exists ...
	I0910 10:29:25.242848    1875 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-592000"
	I0910 10:29:25.243262    1875 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-592000"
	I0910 10:29:25.243271    1875 host.go:66] Checking if "addons-592000" exists ...
	I0910 10:29:25.243330    1875 retry.go:31] will retry after 1.179575214s: connect: dial unix /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/monitor: connect: connection refused
	I0910 10:29:25.242858    1875 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-592000"
	I0910 10:29:25.243361    1875 retry.go:31] will retry after 629.817274ms: connect: dial unix /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/monitor: connect: connection refused
	I0910 10:29:25.243357    1875 retry.go:31] will retry after 1.017527547s: connect: dial unix /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/monitor: connect: connection refused
	I0910 10:29:25.243372    1875 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-592000"
	I0910 10:29:25.242855    1875 addons.go:69] Setting storage-provisioner=true in profile "addons-592000"
	I0910 10:29:25.243381    1875 host.go:66] Checking if "addons-592000" exists ...
	I0910 10:29:25.243394    1875 addons.go:234] Setting addon storage-provisioner=true in "addons-592000"
	I0910 10:29:25.243418    1875 host.go:66] Checking if "addons-592000" exists ...
	I0910 10:29:25.242851    1875 addons.go:69] Setting gcp-auth=true in profile "addons-592000"
	I0910 10:29:25.243475    1875 mustload.go:65] Loading cluster: addons-592000
	I0910 10:29:25.243501    1875 retry.go:31] will retry after 1.301114102s: connect: dial unix /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/monitor: connect: connection refused
	I0910 10:29:25.242950    1875 host.go:66] Checking if "addons-592000" exists ...
	I0910 10:29:25.243549    1875 config.go:182] Loaded profile config "addons-592000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 10:29:25.243613    1875 retry.go:31] will retry after 529.688491ms: connect: dial unix /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/monitor: connect: connection refused
	I0910 10:29:25.243614    1875 retry.go:31] will retry after 1.309123471s: connect: dial unix /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/monitor: connect: connection refused
	I0910 10:29:25.243111    1875 addons.go:69] Setting volumesnapshots=true in profile "addons-592000"
	I0910 10:29:25.243624    1875 addons.go:234] Setting addon volumesnapshots=true in "addons-592000"
	I0910 10:29:25.243187    1875 retry.go:31] will retry after 652.966574ms: connect: dial unix /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/monitor: connect: connection refused
	I0910 10:29:25.242887    1875 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-592000"
	I0910 10:29:25.243631    1875 host.go:66] Checking if "addons-592000" exists ...
	I0910 10:29:25.243682    1875 retry.go:31] will retry after 1.253957797s: connect: dial unix /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/monitor: connect: connection refused
	I0910 10:29:25.243760    1875 retry.go:31] will retry after 744.583065ms: connect: dial unix /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/monitor: connect: connection refused
	I0910 10:29:25.243803    1875 retry.go:31] will retry after 1.484478212s: connect: dial unix /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/monitor: connect: connection refused
	I0910 10:29:25.243826    1875 retry.go:31] will retry after 1.083326325s: connect: dial unix /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/monitor: connect: connection refused
	I0910 10:29:25.243865    1875 retry.go:31] will retry after 1.032102827s: connect: dial unix /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/monitor: connect: connection refused
	I0910 10:29:25.246709    1875 out.go:177] * Verifying Kubernetes components...
	I0910 10:29:25.254648    1875 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0910 10:29:25.257560    1875 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0910 10:29:25.257611    1875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 10:29:25.261629    1875 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0910 10:29:25.261634    1875 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0910 10:29:25.261641    1875 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/id_rsa Username:docker}
	I0910 10:29:25.264533    1875 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 10:29:25.264538    1875 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 10:29:25.264544    1875 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/id_rsa Username:docker}
	I0910 10:29:25.293827    1875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
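The sed pipeline above splices two fragments into the live CoreDNS Corefile before replacing the ConfigMap: a log directive ahead of errors, and a hosts block ahead of the forward plugin so in-cluster lookups of host.minikube.internal resolve to the host side of the VM network. The inserted fragment ends up reading:

	hosts {
	   192.168.105.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf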
	I0910 10:29:25.359483    1875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 10:29:25.413349    1875 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 10:29:25.413359    1875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0910 10:29:25.420889    1875 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0910 10:29:25.420901    1875 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0910 10:29:25.423989    1875 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 10:29:25.423996    1875 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 10:29:25.430754    1875 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0910 10:29:25.430763    1875 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0910 10:29:25.436620    1875 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 10:29:25.436630    1875 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 10:29:25.440454    1875 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0910 10:29:25.440461    1875 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0910 10:29:25.454329    1875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 10:29:25.459326    1875 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0910 10:29:25.459335    1875 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0910 10:29:25.465329    1875 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0910 10:29:25.465339    1875 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0910 10:29:25.487377    1875 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0910 10:29:25.487746    1875 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0910 10:29:25.487755    1875 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0910 10:29:25.488830    1875 node_ready.go:35] waiting up to 6m0s for node "addons-592000" to be "Ready" ...
	I0910 10:29:25.494251    1875 node_ready.go:49] node "addons-592000" has status "Ready":"True"
	I0910 10:29:25.494267    1875 node_ready.go:38] duration metric: took 5.418041ms for node "addons-592000" to be "Ready" ...
	I0910 10:29:25.494271    1875 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 10:29:25.508676    1875 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0910 10:29:25.508685    1875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0910 10:29:25.509381    1875 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-592000" in "kube-system" namespace to be "Ready" ...
	I0910 10:29:25.531463    1875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0910 10:29:25.777858    1875 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0910 10:29:25.786816    1875 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0910 10:29:25.793830    1875 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0910 10:29:25.802800    1875 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0910 10:29:25.809783    1875 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0910 10:29:25.815885    1875 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0910 10:29:25.822816    1875 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0910 10:29:25.825770    1875 addons.go:475] Verifying addon metrics-server=true in "addons-592000"
	I0910 10:29:25.828876    1875 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0910 10:29:25.831866    1875 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0910 10:29:25.831876    1875 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0910 10:29:25.831896    1875 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/id_rsa Username:docker}
	I0910 10:29:25.864567    1875 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0910 10:29:25.864578    1875 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0910 10:29:25.871782    1875 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0910 10:29:25.871798    1875 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0910 10:29:25.874154    1875 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-592000"
	I0910 10:29:25.874172    1875 host.go:66] Checking if "addons-592000" exists ...
	I0910 10:29:25.877875    1875 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0910 10:29:25.886801    1875 out.go:177]   - Using image docker.io/busybox:stable
	I0910 10:29:25.891455    1875 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0910 10:29:25.891468    1875 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0910 10:29:25.894003    1875 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0910 10:29:25.894011    1875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0910 10:29:25.894022    1875 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/id_rsa Username:docker}
	I0910 10:29:25.901776    1875 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0910 10:29:25.909839    1875 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 10:29:25.912762    1875 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 10:29:25.916982    1875 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0910 10:29:25.916991    1875 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0910 10:29:25.918935    1875 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0910 10:29:25.918939    1875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0910 10:29:25.918946    1875 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/id_rsa Username:docker}
	I0910 10:29:25.925670    1875 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0910 10:29:25.925680    1875 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0910 10:29:25.932716    1875 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0910 10:29:25.932728    1875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0910 10:29:25.943548    1875 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0910 10:29:25.943558    1875 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0910 10:29:25.947450    1875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0910 10:29:25.950135    1875 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0910 10:29:25.950142    1875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0910 10:29:25.959577    1875 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0910 10:29:25.959590    1875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0910 10:29:25.985993    1875 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0910 10:29:25.986007    1875 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0910 10:29:25.989354    1875 addons.go:234] Setting addon default-storageclass=true in "addons-592000"
	I0910 10:29:25.989373    1875 host.go:66] Checking if "addons-592000" exists ...
	I0910 10:29:25.989972    1875 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 10:29:25.989979    1875 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 10:29:25.989985    1875 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/id_rsa Username:docker}
	I0910 10:29:25.990531    1875 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-592000" context rescaled to 1 replica
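The rescale at kapi.go:214 trims the default two-replica CoreDNS deployment down to one, which is all a single-node profile needs. The CLI equivalent (a sketch; minikube performs the rescale programmatically rather than through kubectl) would be:

	sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system scale deployment coredns --replicas=1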
	I0910 10:29:25.999760    1875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0910 10:29:26.018467    1875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0910 10:29:26.154011    1875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 10:29:26.266887    1875 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0910 10:29:26.270664    1875 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0910 10:29:26.270674    1875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0910 10:29:26.270684    1875 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/id_rsa Username:docker}
	I0910 10:29:26.281784    1875 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0910 10:29:26.285812    1875 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0910 10:29:26.285822    1875 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0910 10:29:26.285835    1875 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/id_rsa Username:docker}
	I0910 10:29:26.331802    1875 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 10:29:26.335882    1875 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 10:29:26.335891    1875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 10:29:26.335902    1875 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/id_rsa Username:docker}
	I0910 10:29:26.426770    1875 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0910 10:29:26.429749    1875 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0910 10:29:26.429758    1875 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0910 10:29:26.429770    1875 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/id_rsa Username:docker}
	I0910 10:29:26.483200    1875 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0910 10:29:26.483211    1875 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0910 10:29:26.496686    1875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 10:29:26.498116    1875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0910 10:29:26.498356    1875 host.go:66] Checking if "addons-592000" exists ...
	I0910 10:29:26.518977    1875 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0910 10:29:26.518988    1875 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0910 10:29:26.526875    1875 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0910 10:29:26.526887    1875 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0910 10:29:26.534512    1875 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0910 10:29:26.534527    1875 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0910 10:29:26.542294    1875 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 10:29:26.542304    1875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0910 10:29:26.542935    1875 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0910 10:29:26.542942    1875 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0910 10:29:26.549747    1875 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0910 10:29:26.560761    1875 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0910 10:29:26.564834    1875 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0910 10:29:26.571183    1875 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0910 10:29:26.571193    1875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0910 10:29:26.571203    1875 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/id_rsa Username:docker}
	I0910 10:29:26.573795    1875 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0910 10:29:26.576851    1875 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0910 10:29:26.576862    1875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0910 10:29:26.576873    1875 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/id_rsa Username:docker}
	I0910 10:29:26.581960    1875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 10:29:26.599952    1875 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0910 10:29:26.599964    1875 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0910 10:29:26.655758    1875 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0910 10:29:26.659798    1875 out.go:177]   - Using image docker.io/registry:2.8.3
	I0910 10:29:26.663821    1875 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0910 10:29:26.663829    1875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0910 10:29:26.663839    1875 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/id_rsa Username:docker}
	I0910 10:29:26.664106    1875 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0910 10:29:26.664113    1875 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0910 10:29:26.664342    1875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0910 10:29:26.675344    1875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0910 10:29:26.720214    1875 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0910 10:29:26.720224    1875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0910 10:29:26.733699    1875 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0910 10:29:26.737652    1875 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0910 10:29:26.737663    1875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0910 10:29:26.737674    1875 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/id_rsa Username:docker}
	I0910 10:29:26.795471    1875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0910 10:29:26.962990    1875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0910 10:29:27.048487    1875 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0910 10:29:27.048501    1875 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0910 10:29:27.137710    1875 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0910 10:29:27.137721    1875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0910 10:29:27.339548    1875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0910 10:29:27.513657    1875 pod_ready.go:103] pod "etcd-addons-592000" in "kube-system" namespace has status "Ready":"False"
	I0910 10:29:28.700590    1875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (2.700902583s)
	I0910 10:29:28.700608    1875 addons.go:475] Verifying addon ingress=true in "addons-592000"
	I0910 10:29:28.706112    1875 out.go:177] * Verifying ingress addon...
	I0910 10:29:28.712472    1875 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0910 10:29:28.715998    1875 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0910 10:29:28.716006    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
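The kapi.go loop above polls the API server for pods matching a label selector and re-logs the aggregate state until every matching pod reports Ready. A minimal client-go sketch of that wait, assuming a kubeconfig at the default path; the helper names are illustrative, not minikube's.

```go
// Sketch of "wait for all pods with a label selector to become Ready".
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

// waitForLabel polls until every pod matching selector in ns is Ready.
func waitForLabel(client *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := client.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // transient errors and empty lists: keep polling
		}
		for i := range pods.Items {
			if !podReady(&pods.Items[i]) {
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	if err := waitForLabel(client, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		fmt.Println("wait failed:", err)
	}
}
```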
	I0910 10:29:29.071090    1875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.0527005s)
	I0910 10:29:29.071109    1875 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-592000"
	I0910 10:29:29.071114    1875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.917182667s)
	I0910 10:29:29.071234    1875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.573194125s)
	I0910 10:29:29.071252    1875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.574596s)
	I0910 10:29:29.071276    1875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.489388292s)
	W0910 10:29:29.071288    1875 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0910 10:29:29.071306    1875 retry.go:31] will retry after 262.491846ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
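The failure above is a CRD establishment race: the batch apply submits the `csi-hostpath-snapclass` VolumeSnapshotClass alongside the CRDs that define its kind, and the API server has not yet registered the new type, hence `no matches for kind "VolumeSnapshotClass"` and the hint to install CRDs first. minikube simply retries after a short delay (note the `--force` re-apply a few lines below, which succeeds). A minimal retry-with-backoff sketch in that spirit, assuming `kubectl` is on PATH; this is illustrative, not minikube's actual retry.go.

```go
// Retry `kubectl apply` with exponential backoff, the usual fix for
// "no matches for kind" races while a just-created CRD is being established.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply -f <manifest>` until it succeeds or
// the attempt budget is exhausted, doubling the delay between attempts.
func applyWithRetry(manifest string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
		time.Sleep(backoff)
		backoff *= 2
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml", 5, 250*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}
```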
	I0910 10:29:29.075086    1875 out.go:177] * Verifying csi-hostpath-driver addon...
	I0910 10:29:29.084487    1875 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0910 10:29:29.099461    1875 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0910 10:29:29.099474    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:29.243376    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:29.335941    1875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 10:29:29.539816    1875 pod_ready.go:103] pod "etcd-addons-592000" in "kube-system" namespace has status "Ready":"False"
	I0910 10:29:29.625953    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:29.718347    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:30.013626    1875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.3493785s)
	I0910 10:29:30.013645    1875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.338396s)
	I0910 10:29:30.013668    1875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.218289375s)
	I0910 10:29:30.013731    1875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.674261083s)
	I0910 10:29:30.013739    1875 addons.go:475] Verifying addon registry=true in "addons-592000"
	I0910 10:29:30.013722    1875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.050820917s)
	I0910 10:29:30.020049    1875 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-592000 service yakd-dashboard -n yakd-dashboard
	
	I0910 10:29:30.026027    1875 out.go:177] * Verifying registry addon...
	I0910 10:29:30.033495    1875 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0910 10:29:30.061372    1875 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0910 10:29:30.061380    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:30.184413    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:30.265272    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:30.513603    1875 pod_ready.go:93] pod "etcd-addons-592000" in "kube-system" namespace has status "Ready":"True"
	I0910 10:29:30.513613    1875 pod_ready.go:82] duration metric: took 5.004384459s for pod "etcd-addons-592000" in "kube-system" namespace to be "Ready" ...
	I0910 10:29:30.513617    1875 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-592000" in "kube-system" namespace to be "Ready" ...
	I0910 10:29:30.515962    1875 pod_ready.go:93] pod "kube-apiserver-addons-592000" in "kube-system" namespace has status "Ready":"True"
	I0910 10:29:30.515970    1875 pod_ready.go:82] duration metric: took 2.349875ms for pod "kube-apiserver-addons-592000" in "kube-system" namespace to be "Ready" ...
	I0910 10:29:30.515976    1875 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-592000" in "kube-system" namespace to be "Ready" ...
	I0910 10:29:30.518432    1875 pod_ready.go:93] pod "kube-controller-manager-addons-592000" in "kube-system" namespace has status "Ready":"True"
	I0910 10:29:30.518443    1875 pod_ready.go:82] duration metric: took 2.463958ms for pod "kube-controller-manager-addons-592000" in "kube-system" namespace to be "Ready" ...
	I0910 10:29:30.518447    1875 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-592000" in "kube-system" namespace to be "Ready" ...
	I0910 10:29:30.537374    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:30.588684    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:30.716160    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:31.037028    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:31.088502    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:31.217083    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:31.536816    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:31.635489    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:31.718642    1875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.382757958s)
	I0910 10:29:31.732742    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:32.037046    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:32.088687    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:32.216734    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:32.522772    1875 pod_ready.go:103] pod "kube-scheduler-addons-592000" in "kube-system" namespace has status "Ready":"False"
	I0910 10:29:32.536834    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:32.588494    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:32.715412    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:33.035805    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:33.088266    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:33.216438    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:33.522541    1875 pod_ready.go:93] pod "kube-scheduler-addons-592000" in "kube-system" namespace has status "Ready":"True"
	I0910 10:29:33.522549    1875 pod_ready.go:82] duration metric: took 3.004195125s for pod "kube-scheduler-addons-592000" in "kube-system" namespace to be "Ready" ...
	I0910 10:29:33.522554    1875 pod_ready.go:39] duration metric: took 8.028536166s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 10:29:33.522564    1875 api_server.go:52] waiting for apiserver process to appear ...
	I0910 10:29:33.522627    1875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 10:29:33.529662    1875 api_server.go:72] duration metric: took 8.287261125s to wait for apiserver process to appear ...
	I0910 10:29:33.529671    1875 api_server.go:88] waiting for apiserver healthz status ...
	I0910 10:29:33.529678    1875 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0910 10:29:33.532880    1875 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0910 10:29:33.533474    1875 api_server.go:141] control plane version: v1.31.0
	I0910 10:29:33.533480    1875 api_server.go:131] duration metric: took 3.806834ms to wait for apiserver health ...
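The healthz probe is a plain HTTPS GET against the apiserver endpoint logged above; any 200 response counts as healthy. An equivalent standalone check is sketched below, assuming the cluster's default `system:public-info-viewer` binding still allows unauthenticated reads of /healthz; certificate verification is skipped here for brevity where the real client trusts the cluster CA.

```go
// Standalone apiserver healthz check against the endpoint from the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only; trust the cluster CA in real code
		},
	}
	resp, err := client.Get("https://192.168.105.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok", as logged above
}
```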
	I0910 10:29:33.533483    1875 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 10:29:33.534826    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:33.538593    1875 system_pods.go:59] 17 kube-system pods found
	I0910 10:29:33.538605    1875 system_pods.go:61] "coredns-6f6b679f8f-7gqz8" [2ed6156d-968e-407d-8209-753a386c8d92] Running
	I0910 10:29:33.538623    1875 system_pods.go:61] "csi-hostpath-attacher-0" [3e87549f-5b38-4b5f-a840-f22a17e0b278] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0910 10:29:33.538626    1875 system_pods.go:61] "csi-hostpath-resizer-0" [7a272698-29ab-4821-bd8c-d61c043c577b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0910 10:29:33.538630    1875 system_pods.go:61] "csi-hostpathplugin-mhv2g" [b9701d73-7500-448f-96d4-20dbfeadf277] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0910 10:29:33.538632    1875 system_pods.go:61] "etcd-addons-592000" [75b4543a-5eb5-4136-80d7-20e9823f29e4] Running
	I0910 10:29:33.538634    1875 system_pods.go:61] "kube-apiserver-addons-592000" [29d697a4-3888-4401-a128-db39d9a52651] Running
	I0910 10:29:33.538636    1875 system_pods.go:61] "kube-controller-manager-addons-592000" [30a7f73a-3755-4cad-b1f8-eac90a68993b] Running
	I0910 10:29:33.538638    1875 system_pods.go:61] "kube-ingress-dns-minikube" [b53a1c20-7048-458c-8b0a-840a5da3e482] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0910 10:29:33.538640    1875 system_pods.go:61] "kube-proxy-nsw7h" [ed2201cf-7ceb-459a-9741-640454ee88ed] Running
	I0910 10:29:33.538642    1875 system_pods.go:61] "kube-scheduler-addons-592000" [6d4ed535-36f8-499f-86ca-032a298a62c0] Running
	I0910 10:29:33.538644    1875 system_pods.go:61] "metrics-server-84c5f94fbc-sb6ns" [6ef6de4d-79f9-4779-971b-4671e55ffe5a] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 10:29:33.538648    1875 system_pods.go:61] "nvidia-device-plugin-daemonset-pzndx" [0b9dc429-c9ee-40ac-82cb-a97095b45450] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0910 10:29:33.538650    1875 system_pods.go:61] "registry-66c9cd494c-qb2rh" [d1e5edb1-7803-4933-a00b-4e3f52088cd3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0910 10:29:33.538653    1875 system_pods.go:61] "registry-proxy-g9hh4" [ba9d217c-a23d-45ff-985a-a5b541ecc35a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0910 10:29:33.538655    1875 system_pods.go:61] "snapshot-controller-56fcc65765-8nqdw" [e1320020-03d3-4aaf-b5f5-c8a008c81d85] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 10:29:33.538658    1875 system_pods.go:61] "snapshot-controller-56fcc65765-d6h7n" [153f523e-fdb8-4cd5-8cbf-d5d39bc75e3b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 10:29:33.538660    1875 system_pods.go:61] "storage-provisioner" [bfb0c638-eea7-4ec1-b500-9f128a03a3fe] Running
	I0910 10:29:33.538662    1875 system_pods.go:74] duration metric: took 5.176208ms to wait for pod list to return data ...
	I0910 10:29:33.538666    1875 default_sa.go:34] waiting for default service account to be created ...
	I0910 10:29:33.539689    1875 default_sa.go:45] found service account: "default"
	I0910 10:29:33.539696    1875 default_sa.go:55] duration metric: took 1.026917ms for default service account to be created ...
	I0910 10:29:33.539699    1875 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 10:29:33.544289    1875 system_pods.go:86] 17 kube-system pods found
	I0910 10:29:33.544297    1875 system_pods.go:89] "coredns-6f6b679f8f-7gqz8" [2ed6156d-968e-407d-8209-753a386c8d92] Running
	I0910 10:29:33.544301    1875 system_pods.go:89] "csi-hostpath-attacher-0" [3e87549f-5b38-4b5f-a840-f22a17e0b278] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0910 10:29:33.544304    1875 system_pods.go:89] "csi-hostpath-resizer-0" [7a272698-29ab-4821-bd8c-d61c043c577b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0910 10:29:33.544307    1875 system_pods.go:89] "csi-hostpathplugin-mhv2g" [b9701d73-7500-448f-96d4-20dbfeadf277] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0910 10:29:33.544310    1875 system_pods.go:89] "etcd-addons-592000" [75b4543a-5eb5-4136-80d7-20e9823f29e4] Running
	I0910 10:29:33.544313    1875 system_pods.go:89] "kube-apiserver-addons-592000" [29d697a4-3888-4401-a128-db39d9a52651] Running
	I0910 10:29:33.544315    1875 system_pods.go:89] "kube-controller-manager-addons-592000" [30a7f73a-3755-4cad-b1f8-eac90a68993b] Running
	I0910 10:29:33.544319    1875 system_pods.go:89] "kube-ingress-dns-minikube" [b53a1c20-7048-458c-8b0a-840a5da3e482] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0910 10:29:33.544321    1875 system_pods.go:89] "kube-proxy-nsw7h" [ed2201cf-7ceb-459a-9741-640454ee88ed] Running
	I0910 10:29:33.544324    1875 system_pods.go:89] "kube-scheduler-addons-592000" [6d4ed535-36f8-499f-86ca-032a298a62c0] Running
	I0910 10:29:33.544326    1875 system_pods.go:89] "metrics-server-84c5f94fbc-sb6ns" [6ef6de4d-79f9-4779-971b-4671e55ffe5a] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 10:29:33.544330    1875 system_pods.go:89] "nvidia-device-plugin-daemonset-pzndx" [0b9dc429-c9ee-40ac-82cb-a97095b45450] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0910 10:29:33.544334    1875 system_pods.go:89] "registry-66c9cd494c-qb2rh" [d1e5edb1-7803-4933-a00b-4e3f52088cd3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0910 10:29:33.544336    1875 system_pods.go:89] "registry-proxy-g9hh4" [ba9d217c-a23d-45ff-985a-a5b541ecc35a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0910 10:29:33.544339    1875 system_pods.go:89] "snapshot-controller-56fcc65765-8nqdw" [e1320020-03d3-4aaf-b5f5-c8a008c81d85] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 10:29:33.544342    1875 system_pods.go:89] "snapshot-controller-56fcc65765-d6h7n" [153f523e-fdb8-4cd5-8cbf-d5d39bc75e3b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 10:29:33.544344    1875 system_pods.go:89] "storage-provisioner" [bfb0c638-eea7-4ec1-b500-9f128a03a3fe] Running
	I0910 10:29:33.544348    1875 system_pods.go:126] duration metric: took 4.645584ms to wait for k8s-apps to be running ...
	I0910 10:29:33.544351    1875 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 10:29:33.544395    1875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 10:29:33.550335    1875 system_svc.go:56] duration metric: took 5.981834ms WaitForService to wait for kubelet
	I0910 10:29:33.550345    1875 kubeadm.go:582] duration metric: took 8.307946167s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 10:29:33.550357    1875 node_conditions.go:102] verifying NodePressure condition ...
	I0910 10:29:33.552029    1875 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 10:29:33.552040    1875 node_conditions.go:123] node cpu capacity is 2
	I0910 10:29:33.552046    1875 node_conditions.go:105] duration metric: took 1.686958ms to run NodePressure ...
	I0910 10:29:33.552052    1875 start.go:241] waiting for startup goroutines ...
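The node_conditions figures above (2 CPUs, 17734596Ki of ephemeral storage) come straight from the node's reported capacity. A small client-go sketch that reads the same fields, assuming a default kubeconfig; illustrative only, not minikube's node_conditions code.

```go
// List nodes and print the CPU and ephemeral-storage capacity that the
// NodePressure verification reads.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
```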
	I0910 10:29:33.588718    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:33.716125    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:34.037430    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:34.087318    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:34.216634    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:34.535827    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:34.588643    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:34.716076    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:34.906321    1875 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0910 10:29:34.906339    1875 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/id_rsa Username:docker}
	I0910 10:29:34.933315    1875 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0910 10:29:34.940018    1875 addons.go:234] Setting addon gcp-auth=true in "addons-592000"
	I0910 10:29:34.940041    1875 host.go:66] Checking if "addons-592000" exists ...
	I0910 10:29:34.940761    1875 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0910 10:29:34.940769    1875 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/addons-592000/id_rsa Username:docker}
	I0910 10:29:34.966543    1875 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 10:29:34.970322    1875 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0910 10:29:34.974476    1875 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0910 10:29:34.974482    1875 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0910 10:29:34.980478    1875 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0910 10:29:34.980486    1875 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0910 10:29:34.986522    1875 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0910 10:29:34.986529    1875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0910 10:29:34.993772    1875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0910 10:29:35.037304    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:35.137040    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:35.215987    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:35.311642    1875 addons.go:475] Verifying addon gcp-auth=true in "addons-592000"
	I0910 10:29:35.315607    1875 out.go:177] * Verifying gcp-auth addon...
	I0910 10:29:35.323004    1875 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0910 10:29:35.324259    1875 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0910 10:29:35.537178    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:35.588860    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:35.714583    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:36.041552    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:36.141897    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:36.239795    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:36.537505    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:36.589359    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:36.716585    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:37.038675    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:37.090078    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:37.218105    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:37.537272    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:37.588694    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:37.716351    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:38.036974    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:38.088068    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:38.215899    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:38.537092    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:38.588120    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:38.716232    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:39.036994    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:39.088044    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:39.216043    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:39.536925    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:39.588789    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:39.716712    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:40.037381    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:40.088764    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:40.216139    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:40.537130    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:40.588756    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:40.716630    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:41.037028    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:41.088801    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:41.216584    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:41.536594    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:41.589199    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:41.716090    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:42.037075    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:42.088553    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:42.217951    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:42.537051    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:42.588495    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:42.716148    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:43.036984    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:43.088286    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:43.216303    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:43.536960    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:43.587404    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:43.716152    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:44.036220    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:44.088294    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:44.216792    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:44.536771    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:44.588284    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:44.716594    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:45.036876    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:45.087344    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:45.215730    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:45.536656    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:45.588193    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:45.714623    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:46.037330    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:46.088123    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:46.217472    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:46.537018    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:46.588662    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:46.716358    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:47.036882    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:47.088229    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:47.216021    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:47.537115    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:47.588098    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:47.715863    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:48.036172    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:48.088356    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:48.215971    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:48.536897    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:48.588011    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:48.715846    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:49.037040    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:49.138354    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:49.216488    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:49.535453    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:49.588333    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:49.716318    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:50.036748    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:50.137636    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:50.237985    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:50.536607    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:50.587891    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:50.715694    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:51.036753    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:51.088015    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:51.215902    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:51.537710    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:51.588408    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:51.720150    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:52.038394    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:52.090047    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:52.218446    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:52.536749    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:52.587930    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:52.715746    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:53.037153    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:53.088810    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:53.216370    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:53.536978    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:53.587533    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:53.716947    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:54.037228    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:54.088122    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:54.215364    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:54.536878    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:54.588368    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:54.715854    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:55.036306    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:55.088004    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:55.216407    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:55.536368    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:55.587602    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:55.715699    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:56.036477    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:56.087500    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:56.215339    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:56.535663    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:56.587609    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:56.715449    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:57.036276    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:57.087497    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:57.215395    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:57.536386    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:57.587621    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:57.715346    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:58.036567    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:58.088156    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:58.215236    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:58.536317    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:58.587740    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:58.715158    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:59.036266    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:59.087572    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:59.215368    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:29:59.536740    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:29:59.587924    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:29:59.715123    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:00.036661    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:30:00.087387    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:00.214180    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:00.537050    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:30:00.585716    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:00.713770    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:01.036146    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:30:01.086091    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:01.215443    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:01.536277    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:30:01.587556    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:01.715921    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:02.034589    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:30:02.087442    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:02.215265    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:02.535312    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:30:02.587659    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:02.715532    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:03.036219    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:30:03.087096    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:03.215188    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:03.536134    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:30:03.587578    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:03.717360    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:04.038958    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:30:04.090383    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:04.219874    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:04.536337    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:30:04.587480    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:04.715641    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:05.037197    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:30:05.088836    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:05.216177    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:05.536810    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:30:05.587660    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:05.716738    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:06.036295    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:30:06.088186    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:06.215432    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:06.536225    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:30:06.587530    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:06.715289    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:07.035503    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:30:07.087616    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:07.215492    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:07.536267    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:30:07.587598    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:07.714100    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:08.036242    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:30:08.087569    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:08.216874    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:08.536223    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:30:08.587684    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:08.725063    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:09.036913    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 10:30:09.088994    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:09.218287    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:09.536325    1875 kapi.go:107] duration metric: took 39.504100375s to wait for kubernetes.io/minikube-addons=registry ...
	I0910 10:30:09.587786    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:09.716138    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:10.087735    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:10.215140    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:10.587739    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:10.715413    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:11.088072    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:11.216733    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:11.587417    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:11.714898    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:12.087107    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:12.215127    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:12.586346    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:12.714277    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:13.088942    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:13.217334    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:13.588499    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:13.715517    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:14.085926    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:14.215099    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:14.587709    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:14.714966    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:15.087534    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:15.216651    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:15.587884    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:15.715646    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:16.090741    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:16.215998    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:16.587518    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:16.714982    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:17.088622    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:17.215246    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:17.587290    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:17.713088    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:18.087423    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:18.214627    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:18.587461    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:18.714788    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:19.088741    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:19.214906    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:19.587341    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:19.715609    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:20.090961    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:20.215606    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:20.587301    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:20.715169    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:21.087171    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:21.214581    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:21.586978    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:21.714980    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:22.087162    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:22.214790    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:22.589148    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:22.716999    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:23.090853    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:23.216912    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:23.587494    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:23.714562    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:24.087181    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:24.214875    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:24.587123    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:24.715292    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:25.087750    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:25.215302    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:25.587346    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:25.715664    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:26.087565    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:26.215056    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:26.586891    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:26.714704    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:27.087192    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:27.218449    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:27.586957    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:27.715137    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:28.086015    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:28.218152    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:28.588450    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:28.714611    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:29.087212    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:29.215346    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:29.586999    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:29.714417    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:30.087313    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:30.213973    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:30.587454    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:30.715076    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:31.085640    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:31.216159    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:31.586744    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:31.713669    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:32.086944    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:32.215309    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:32.586831    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:32.714533    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:33.089117    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:33.216066    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:33.586955    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:33.714646    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:34.085479    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:34.215808    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:34.591093    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:34.718925    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:35.086956    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:35.214959    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:35.588436    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:35.715497    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:36.086888    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:36.214756    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:36.586540    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:36.715670    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:37.088645    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:37.219030    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:37.586709    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:37.714960    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:38.086092    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:38.214511    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:38.586576    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:38.714553    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:39.086625    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:39.214334    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:39.586620    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:39.714756    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:40.086781    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:40.214226    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:40.586551    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:40.714081    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:41.087203    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:41.214830    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:41.586682    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:41.714386    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:42.086754    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:42.213038    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:42.586491    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:42.712180    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:43.086564    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:43.214032    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:43.586562    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:43.714242    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:44.086726    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:44.213967    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:44.587002    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:44.714280    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:45.086537    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:45.213927    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:45.586408    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:45.713662    1875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 10:30:46.086654    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:46.214264    1875 kapi.go:107] duration metric: took 1m17.504283583s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0910 10:30:46.588271    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:47.087120    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:47.586497    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:48.087269    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:48.585165    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:49.085567    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:49.585526    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:50.086888    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:50.586486    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:51.086604    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:51.587154    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:52.086963    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:52.586904    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:53.086227    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:53.586464    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:54.086684    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:54.585775    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:55.087159    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:55.586117    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:56.086013    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:56.585945    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:57.086286    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:57.324336    1875 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0910 10:30:57.324345    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:30:57.586177    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 10:30:57.825119    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:30:58.086036    1875 kapi.go:107] duration metric: took 1m29.004411875s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
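The kapi.go lines above record a simple readiness poll: roughly every 500 ms (inferred from the timestamps), list the pods matching a label selector, log the current phase while any pod is still Pending, and emit a "duration metric" line once the wait completes. Below is a minimal client-go sketch of that pattern — illustrative only, not minikube's actual kapi implementation; the selector string is taken from the log, while the namespace, timeout, and all identifiers are assumptions.

```go
// waitpods.go: poll pods matching a label selector until all report Running,
// roughly mirroring the ~500 ms kapi.go poll recorded in the log above.
// Sketch only; not minikube's actual kapi code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a standard kubeconfig at ~/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	selector := "kubernetes.io/minikube-addons=csi-hostpath-driver" // from the log
	start := time.Now()

	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 10*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(ctx,
				metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient error or pods not created yet: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
}
```

The four selectors in this log (registry, ingress-nginx, csi-hostpath-driver, gcp-auth) are all waited on concurrently in this style, which is why their poll lines interleave above.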
	I0910 10:30:58.323898    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:30:58.825503    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:30:59.324410    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:30:59.824478    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:00.324818    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:00.829291    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:01.329376    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:01.830340    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:02.324777    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:02.831224    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:03.325978    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:03.829886    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:04.327184    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:04.827536    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:05.322910    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:05.827912    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:06.325320    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:06.829818    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:07.322343    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:07.825346    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:08.325631    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:08.828715    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:09.323674    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:09.823891    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:10.323928    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:10.830884    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:11.325898    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:11.830777    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:12.325590    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:12.829008    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:13.324835    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:13.829626    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:14.324891    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:14.824585    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:15.324034    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:15.825665    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:16.324133    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:16.824622    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:17.325567    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:17.823476    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:18.327790    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:18.824206    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:19.323511    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:19.823351    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:20.324764    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:20.824153    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:21.325734    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:21.825459    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:22.327779    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:22.829441    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:23.323676    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:23.828179    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:24.325309    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:24.829789    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:25.327528    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:25.829317    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:26.326308    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:26.827586    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:27.326782    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:27.825613    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:28.327361    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:28.827464    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:29.324560    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:29.830104    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:30.321654    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:30.825631    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:31.323302    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:31.828136    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:32.327964    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:32.822764    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:33.324099    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:33.824258    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:34.324978    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:34.825726    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:35.328536    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:35.829095    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:36.327760    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:36.827281    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:37.328001    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:37.824356    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:38.322938    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:38.821979    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:39.322780    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:39.823302    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:40.324991    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:40.822877    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:41.323384    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:41.822633    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:42.324317    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:42.824046    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:43.326612    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:43.824431    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:44.324091    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:44.824842    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:45.327629    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:45.823840    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:46.324438    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:46.824129    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:47.323017    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:47.823831    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:48.324382    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:48.825824    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:49.323633    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:49.822137    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:50.323996    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:50.828365    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:51.329282    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:51.827258    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:52.322906    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:52.828354    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:53.326159    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:53.825431    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:54.323463    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:54.824326    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:55.323389    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:55.823439    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:56.323461    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:56.827974    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:57.325961    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:57.828622    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:58.325268    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:58.823992    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:59.326116    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:31:59.825741    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:32:00.324590    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:32:00.829686    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:32:01.324736    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:32:01.824765    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:32:02.324987    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:32:02.829128    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:32:03.326512    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:32:03.826832    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:32:04.327486    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:32:04.828426    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:32:05.328693    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:32:05.829131    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:32:06.329586    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:32:06.831584    1875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 10:32:07.330571    1875 kapi.go:107] duration metric: took 2m32.003648208s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0910 10:32:07.334839    1875 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-592000 cluster.
	I0910 10:32:07.339704    1875 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0910 10:32:07.343750    1875 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
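The message above names the opt-out mechanism: the gcp-auth webhook skips any pod whose labels include the `gcp-auth-skip-secret` key. A minimal client-go sketch of creating such a pod — the pod name, image, and namespace are placeholders, not anything from this test run:

```go
// skiplabel.go: create a pod that opts out of gcp-auth credential injection
// by carrying the gcp-auth-skip-secret label, per the log message above.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds", // placeholder name
			// The gcp-auth webhook skips pods carrying this label key.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox", // placeholder image
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(
		context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```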
	I0910 10:32:07.346769    1875 out.go:177] * Enabled addons: metrics-server, inspektor-gadget, storage-provisioner-rancher, ingress-dns, storage-provisioner, default-storageclass, volcano, nvidia-device-plugin, cloud-spanner, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0910 10:32:07.350752    1875 addons.go:510] duration metric: took 2m42.104464958s for enable addons: enabled=[metrics-server inspektor-gadget storage-provisioner-rancher ingress-dns storage-provisioner default-storageclass volcano nvidia-device-plugin cloud-spanner yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0910 10:32:07.350768    1875 start.go:246] waiting for cluster config update ...
	I0910 10:32:07.350789    1875 start.go:255] writing updated cluster config ...
	I0910 10:32:07.351250    1875 ssh_runner.go:195] Run: rm -f paused
	I0910 10:32:07.500189    1875 start.go:600] kubectl: 1.29.2, cluster: 1.31.0 (minor skew: 2)
	I0910 10:32:07.503729    1875 out.go:201] 
	W0910 10:32:07.506954    1875 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0.
	I0910 10:32:07.510898    1875 out.go:177]   - Want kubectl v1.31.0? Try 'minikube kubectl -- get pods -A'
	I0910 10:32:07.518874    1875 out.go:177] * Done! kubectl is now configured to use "addons-592000" cluster and "default" namespace by default
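The warning above flags a client/server minor-version skew of 2 (kubectl 1.29.2 against Kubernetes 1.31.0), and the log's own suggested workaround is to route commands through minikube's bundled, version-matched kubectl. A trivial Go wrapper for that invocation — assuming only that `minikube` is on PATH:

```go
// matchedkubectl.go: invoke minikube's bundled kubectl (version-matched to the
// cluster) instead of a skewed system kubectl, per the hint in the log above.
package main

import (
	"os"
	"os/exec"
)

func main() {
	// Equivalent to the suggested: minikube kubectl -- get pods -A
	cmd := exec.Command("minikube", "kubectl", "--", "get", "pods", "-A")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```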
	
	
	==> Docker <==
	Sep 10 17:41:50 addons-592000 dockerd[1272]: time="2024-09-10T17:41:50.488290313Z" level=info msg="shim disconnected" id=4effb0a8f037aac210692c55db17317e42d82f67cca0e4baf3cb442fb232eb34 namespace=moby
	Sep 10 17:41:50 addons-592000 dockerd[1265]: time="2024-09-10T17:41:50.488518053Z" level=info msg="ignoring event" container=4effb0a8f037aac210692c55db17317e42d82f67cca0e4baf3cb442fb232eb34 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:41:50 addons-592000 dockerd[1272]: time="2024-09-10T17:41:50.488687058Z" level=warning msg="cleaning up after shim disconnected" id=4effb0a8f037aac210692c55db17317e42d82f67cca0e4baf3cb442fb232eb34 namespace=moby
	Sep 10 17:41:50 addons-592000 dockerd[1272]: time="2024-09-10T17:41:50.488734187Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 17:41:56 addons-592000 dockerd[1265]: time="2024-09-10T17:41:56.351528028Z" level=info msg="ignoring event" container=3f4ec5c06c36a5bd85137393573b09d9fea639ef5ca4f81db2336381640bbbf2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:41:56 addons-592000 dockerd[1272]: time="2024-09-10T17:41:56.351974926Z" level=info msg="shim disconnected" id=3f4ec5c06c36a5bd85137393573b09d9fea639ef5ca4f81db2336381640bbbf2 namespace=moby
	Sep 10 17:41:56 addons-592000 dockerd[1272]: time="2024-09-10T17:41:56.352007712Z" level=warning msg="cleaning up after shim disconnected" id=3f4ec5c06c36a5bd85137393573b09d9fea639ef5ca4f81db2336381640bbbf2 namespace=moby
	Sep 10 17:41:56 addons-592000 dockerd[1272]: time="2024-09-10T17:41:56.352012164Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 17:41:56 addons-592000 dockerd[1265]: time="2024-09-10T17:41:56.492025920Z" level=info msg="ignoring event" container=e7cff5d22c181ce0af24e9261766eed30400076c102a2434c7bb8fc2bc880e5f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:41:56 addons-592000 dockerd[1272]: time="2024-09-10T17:41:56.492252551Z" level=info msg="shim disconnected" id=e7cff5d22c181ce0af24e9261766eed30400076c102a2434c7bb8fc2bc880e5f namespace=moby
	Sep 10 17:41:56 addons-592000 dockerd[1272]: time="2024-09-10T17:41:56.492281967Z" level=warning msg="cleaning up after shim disconnected" id=e7cff5d22c181ce0af24e9261766eed30400076c102a2434c7bb8fc2bc880e5f namespace=moby
	Sep 10 17:41:56 addons-592000 dockerd[1272]: time="2024-09-10T17:41:56.492286086Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 17:41:56 addons-592000 dockerd[1272]: time="2024-09-10T17:41:56.509243247Z" level=warning msg="cleanup warnings time=\"2024-09-10T17:41:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 10 17:41:56 addons-592000 dockerd[1272]: time="2024-09-10T17:41:56.531074262Z" level=info msg="shim disconnected" id=3ae2cc2fd9e7be05b8e65014593462af188e7e102273963d53354d5da59ebddf namespace=moby
	Sep 10 17:41:56 addons-592000 dockerd[1272]: time="2024-09-10T17:41:56.531219012Z" level=warning msg="cleaning up after shim disconnected" id=3ae2cc2fd9e7be05b8e65014593462af188e7e102273963d53354d5da59ebddf namespace=moby
	Sep 10 17:41:56 addons-592000 dockerd[1272]: time="2024-09-10T17:41:56.531238192Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 17:41:56 addons-592000 dockerd[1265]: time="2024-09-10T17:41:56.532040620Z" level=info msg="ignoring event" container=3ae2cc2fd9e7be05b8e65014593462af188e7e102273963d53354d5da59ebddf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:41:56 addons-592000 dockerd[1272]: time="2024-09-10T17:41:56.611074829Z" level=info msg="shim disconnected" id=4c72403f95e88aa0b1b15b26ae786b6573f14197928e0e41b24861236b417067 namespace=moby
	Sep 10 17:41:56 addons-592000 dockerd[1272]: time="2024-09-10T17:41:56.611104079Z" level=warning msg="cleaning up after shim disconnected" id=4c72403f95e88aa0b1b15b26ae786b6573f14197928e0e41b24861236b417067 namespace=moby
	Sep 10 17:41:56 addons-592000 dockerd[1272]: time="2024-09-10T17:41:56.611107823Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 17:41:56 addons-592000 dockerd[1265]: time="2024-09-10T17:41:56.611268925Z" level=info msg="ignoring event" container=4c72403f95e88aa0b1b15b26ae786b6573f14197928e0e41b24861236b417067 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:41:56 addons-592000 dockerd[1272]: time="2024-09-10T17:41:56.657876920Z" level=info msg="shim disconnected" id=e41c5cc4827b75decaa1062f46381f59994310a7fd0b8527e6be4ef7f3f26d19 namespace=moby
	Sep 10 17:41:56 addons-592000 dockerd[1272]: time="2024-09-10T17:41:56.657966957Z" level=warning msg="cleaning up after shim disconnected" id=e41c5cc4827b75decaa1062f46381f59994310a7fd0b8527e6be4ef7f3f26d19 namespace=moby
	Sep 10 17:41:56 addons-592000 dockerd[1265]: time="2024-09-10T17:41:56.658034027Z" level=info msg="ignoring event" container=e41c5cc4827b75decaa1062f46381f59994310a7fd0b8527e6be4ef7f3f26d19 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:41:56 addons-592000 dockerd[1272]: time="2024-09-10T17:41:56.657989217Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	1025a837f40e5       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            32 seconds ago      Exited              gadget                     7                   cd74953440164       gadget-vbxln
	5e70ab3ace129       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                   0                   c4f02c8bf30c2       gcp-auth-89d5ffd79-mgkkt
	145b79b803fe6       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                 0                   79c5b1cd36ab8       ingress-nginx-controller-bc57996ff-krrrr
	67eacd31859e1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                      0                   dd5f21785c479       ingress-nginx-admission-patch-kq245
	d79e1f3b06002       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                     0                   cb9ddfbfc25fc       ingress-nginx-admission-create-qcnqj
	bae295e5fcd62       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        11 minutes ago      Running             yakd                       0                   8c07cfa90e1ff       yakd-dashboard-67d98fc6b-ln7xx
	3ae2cc2fd9e7b       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              11 minutes ago      Exited              registry-proxy             0                   e41c5cc4827b7       registry-proxy-g9hh4
	dbe01519331e1       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               11 minutes ago      Running             cloud-spanner-emulator     0                   5648b7f49289e       cloud-spanner-emulator-769b77f747-85njj
	e7cff5d22c181       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                             12 minutes ago      Exited              registry                   0                   4c72403f95e88       registry-66c9cd494c-qb2rh
	614ca294c4f39       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago      Running             nvidia-device-plugin-ctr   0                   2d9028815317c       nvidia-device-plugin-daemonset-pzndx
	91cebd51ab75d       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns       0                   c11fb68e364b0       kube-ingress-dns-minikube
	e454db0c77a16       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner     0                   7af242efba3e0       local-path-provisioner-86d989889c-h5ctd
	0b4fb9e93b88a       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9        12 minutes ago      Running             metrics-server             0                   f1485cfeb2d51       metrics-server-84c5f94fbc-sb6ns
	cf29947c74eba       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner        0                   637e2c6010b95       storage-provisioner
	3c3197d5b12a8       2437cf7621777                                                                                                                12 minutes ago      Running             coredns                    0                   877c4665be2be       coredns-6f6b679f8f-7gqz8
	ea1510481c9df       71d55d66fd4ee                                                                                                                12 minutes ago      Running             kube-proxy                 0                   e63cccc2599f6       kube-proxy-nsw7h
	c9fc76460a920       cd0f0ae0ec9e0                                                                                                                12 minutes ago      Running             kube-apiserver             0                   ae7032a0ba5ab       kube-apiserver-addons-592000
	cdda8889658ad       fbbbd428abb4d                                                                                                                12 minutes ago      Running             kube-scheduler             0                   9c1a6c4810ee9       kube-scheduler-addons-592000
	a5b5af720e175       fcb0683e6bdbd                                                                                                                12 minutes ago      Running             kube-controller-manager    0                   1d3dbfdac85c5       kube-controller-manager-addons-592000
	0b74e0fd856e2       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                       0                   d8489de09676c       etcd-addons-592000
	
	
	==> controller_ingress [145b79b803fe] <==
	W0910 17:30:45.758058       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0910 17:30:45.758139       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0910 17:30:45.761184       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.0" state="clean" commit="9edcffcde5595e8a5b1a35f88c421764e575afce" platform="linux/arm64"
	I0910 17:30:45.826431       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0910 17:30:45.832508       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0910 17:30:45.835869       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0910 17:30:45.840405       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"2921581d-ce09-4cd8-b6e1-b837f46c7b4a", APIVersion:"v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0910 17:30:45.841597       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"2e125521-7ef7-4949-98f6-7a8023722202", APIVersion:"v1", ResourceVersion:"593", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0910 17:30:45.841617       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"b04b738c-a1ae-41e3-b182-089216a0367c", APIVersion:"v1", ResourceVersion:"607", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0910 17:30:47.038130       7 nginx.go:317] "Starting NGINX process"
	I0910 17:30:47.038314       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0910 17:30:47.038810       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0910 17:30:47.038947       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0910 17:30:47.056404       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0910 17:30:47.056711       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-krrrr"
	I0910 17:30:47.060301       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-krrrr" node="addons-592000"
	I0910 17:30:47.070446       7 controller.go:213] "Backend successfully reloaded"
	I0910 17:30:47.070491       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0910 17:30:47.070716       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-krrrr", UID:"ed57db95-441b-4e42-96ed-3d80191de858", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [3c3197d5b12a] <==
	[INFO] 127.0.0.1:37424 - 43531 "HINFO IN 2577111125036051479.1742811790643504515. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009847409s
	[INFO] 10.244.0.8:43610 - 16406 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000110996s
	[INFO] 10.244.0.8:43610 - 19216 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000163868s
	[INFO] 10.244.0.8:60568 - 62559 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000050995s
	[INFO] 10.244.0.8:60568 - 4702 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000024225s
	[INFO] 10.244.0.8:46727 - 17874 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000029521s
	[INFO] 10.244.0.8:46727 - 20701 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027395s
	[INFO] 10.244.0.8:53653 - 54471 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000028437s
	[INFO] 10.244.0.8:53653 - 9670 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00006071s
	[INFO] 10.244.0.8:60168 - 63227 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000027019s
	[INFO] 10.244.0.8:60168 - 31993 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000015553s
	[INFO] 10.244.0.8:57605 - 25052 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000018388s
	[INFO] 10.244.0.8:57605 - 13023 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00001622s
	[INFO] 10.244.0.8:60338 - 18560 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000011716s
	[INFO] 10.244.0.8:60338 - 7809 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000009798s
	[INFO] 10.244.0.8:40343 - 51330 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000011926s
	[INFO] 10.244.0.8:40343 - 4227 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000054748s
	[INFO] 10.244.0.24:57896 - 10667 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.004303233s
	[INFO] 10.244.0.24:49038 - 53726 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.004414916s
	[INFO] 10.244.0.24:60288 - 14242 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00023995s
	[INFO] 10.244.0.24:48843 - 47020 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000209697s
	[INFO] 10.244.0.24:49156 - 43868 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125601s
	[INFO] 10.244.0.24:45638 - 34258 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000104765s
	[INFO] 10.244.0.24:39854 - 355 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001413575s
	[INFO] 10.244.0.24:56931 - 49209 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001474416s
	
	
	==> describe nodes <==
	Name:               addons-592000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-592000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=addons-592000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T10_29_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-592000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 17:29:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-592000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 17:41:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 17:38:00 +0000   Tue, 10 Sep 2024 17:29:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 17:38:00 +0000   Tue, 10 Sep 2024 17:29:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 17:38:00 +0000   Tue, 10 Sep 2024 17:29:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 17:38:00 +0000   Tue, 10 Sep 2024 17:29:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-592000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 795f27a08ae946a8a5e5544ce603e24f
	  System UUID:                795f27a08ae946a8a5e5544ce603e24f
	  Boot ID:                    b84756e6-5c35-41b8-825a-7ef612569e39
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  default                     cloud-spanner-emulator-769b77f747-85njj     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     registry-test                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  gadget                      gadget-vbxln                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-mgkkt                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-krrrr    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 coredns-6f6b679f8f-7gqz8                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-592000                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-592000                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-592000       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-nsw7h                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-592000                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-sb6ns             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-pzndx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-h5ctd     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-ln7xx              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x2 over 12m)  kubelet          Node addons-592000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x2 over 12m)  kubelet          Node addons-592000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x2 over 12m)  kubelet          Node addons-592000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                12m                kubelet          Node addons-592000 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-592000 event: Registered Node addons-592000 in Controller
	
	
	==> dmesg <==
	[  +0.044203] kauditd_printk_skb: 64 callbacks suppressed
	[  +4.965012] kauditd_printk_skb: 293 callbacks suppressed
	[  +4.966445] kauditd_printk_skb: 20 callbacks suppressed
	[  +8.884647] kauditd_printk_skb: 11 callbacks suppressed
	[Sep10 17:30] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.271072] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.886059] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.185943] kauditd_printk_skb: 13 callbacks suppressed
	[ +13.020087] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.244091] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.608745] kauditd_printk_skb: 12 callbacks suppressed
	[Sep10 17:31] kauditd_printk_skb: 2 callbacks suppressed
	[Sep10 17:32] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.779221] kauditd_printk_skb: 19 callbacks suppressed
	[ +27.853851] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.193374] kauditd_printk_skb: 20 callbacks suppressed
	[Sep10 17:33] kauditd_printk_skb: 2 callbacks suppressed
	[Sep10 17:36] kauditd_printk_skb: 2 callbacks suppressed
	[Sep10 17:40] kauditd_printk_skb: 2 callbacks suppressed
	[Sep10 17:41] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.890549] kauditd_printk_skb: 7 callbacks suppressed
	[ +20.254907] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.788308] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.738598] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.439379] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [0b74e0fd856e] <==
	{"level":"info","ts":"2024-09-10T17:29:16.494795Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.2:2380"}
	{"level":"info","ts":"2024-09-10T17:29:16.496412Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.2:2380"}
	{"level":"info","ts":"2024-09-10T17:29:16.562039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-10T17:29:16.562111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-10T17:29:16.562178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2024-09-10T17:29:16.562200Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2024-09-10T17:29:16.562219Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-09-10T17:29:16.562254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2024-09-10T17:29:16.562275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-09-10T17:29:16.570054Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T17:29:16.570256Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-592000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-10T17:29:16.570283Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T17:29:16.570332Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T17:29:16.574363Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T17:29:16.574869Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2024-09-10T17:29:16.575168Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T17:29:16.575614Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-10T17:29:16.578273Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T17:29:16.578317Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T17:29:16.578350Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T17:29:16.582028Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-10T17:29:16.582040Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-10T17:39:17.045852Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1815}
	{"level":"info","ts":"2024-09-10T17:39:17.139999Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1815,"took":"90.458513ms","hash":2158818084,"current-db-size-bytes":8851456,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":4648960,"current-db-size-in-use":"4.6 MB"}
	{"level":"info","ts":"2024-09-10T17:39:17.140029Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2158818084,"revision":1815,"compact-revision":-1}
	
	
	==> gcp-auth [5e70ab3ace12] <==
	2024/09/10 17:32:06 GCP Auth Webhook started!
	2024/09/10 17:32:22 Ready to marshal response ...
	2024/09/10 17:32:22 Ready to write response ...
	2024/09/10 17:32:22 Ready to marshal response ...
	2024/09/10 17:32:22 Ready to write response ...
	2024/09/10 17:32:44 Ready to marshal response ...
	2024/09/10 17:32:44 Ready to write response ...
	2024/09/10 17:32:45 Ready to marshal response ...
	2024/09/10 17:32:45 Ready to write response ...
	2024/09/10 17:32:45 Ready to marshal response ...
	2024/09/10 17:32:45 Ready to write response ...
	2024/09/10 17:40:56 Ready to marshal response ...
	2024/09/10 17:40:56 Ready to write response ...
	2024/09/10 17:41:05 Ready to marshal response ...
	2024/09/10 17:41:05 Ready to write response ...
	2024/09/10 17:41:33 Ready to marshal response ...
	2024/09/10 17:41:33 Ready to write response ...
	
	
	==> kernel <==
	 17:41:57 up 12 min,  0 users,  load average: 0.54, 0.58, 0.45
	Linux addons-592000 5.10.207 #1 SMP PREEMPT Mon Sep 9 22:12:33 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c9fc76460a92] <==
	I0910 17:32:35.471781       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0910 17:32:35.488282       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0910 17:32:35.599885       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0910 17:32:35.730275       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0910 17:32:35.732507       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0910 17:32:35.763851       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0910 17:32:36.492916       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0910 17:32:36.509348       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0910 17:32:36.733684       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0910 17:32:36.734252       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0910 17:32:36.763139       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0910 17:32:36.764690       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0910 17:32:36.905829       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0910 17:41:12.536330       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0910 17:41:50.288423       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:41:50.288454       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0910 17:41:50.298632       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:41:50.298648       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0910 17:41:50.314928       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:41:50.314951       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0910 17:41:50.402017       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:41:50.402030       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0910 17:41:51.315597       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0910 17:41:51.402720       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0910 17:41:51.435363       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [a5b5af720e17] <==
	I0910 17:41:44.351828       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-592000"
	I0910 17:41:50.334464       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="3.203µs"
	E0910 17:41:51.317460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0910 17:41:51.403507       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0910 17:41:51.435992       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:41:52.343877       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:41:52.343989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:41:52.555962       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:41:52.556079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:41:52.726834       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:41:52.726942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:41:53.944601       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:41:53.944846       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:41:54.281777       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:41:54.281810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0910 17:41:54.937575       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0910 17:41:54.937653       1 shared_informer.go:320] Caches are synced for resource quota
	W0910 17:41:55.008312       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:41:55.008627       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:41:55.201168       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:41:55.201239       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0910 17:41:55.279008       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0910 17:41:55.279218       1 shared_informer.go:320] Caches are synced for garbage collector
	I0910 17:41:55.775933       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="2.705µs"
	I0910 17:41:56.473557       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="1.332µs"
	
	
	==> kube-proxy [ea1510481c9d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 17:29:26.222111       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 17:29:26.246572       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	E0910 17:29:26.246733       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 17:29:26.324058       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 17:29:26.324085       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 17:29:26.324101       1 server_linux.go:169] "Using iptables Proxier"
	I0910 17:29:26.324774       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 17:29:26.324897       1 server.go:483] "Version info" version="v1.31.0"
	I0910 17:29:26.324908       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 17:29:26.325762       1 config.go:197] "Starting service config controller"
	I0910 17:29:26.325896       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 17:29:26.325906       1 config.go:104] "Starting endpoint slice config controller"
	I0910 17:29:26.325908       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 17:29:26.326139       1 config.go:326] "Starting node config controller"
	I0910 17:29:26.326141       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 17:29:26.426096       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0910 17:29:26.426126       1 shared_informer.go:320] Caches are synced for service config
	I0910 17:29:26.426365       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [cdda8889658a] <==
	W0910 17:29:17.705879       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0910 17:29:17.705891       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 17:29:17.705911       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0910 17:29:17.705920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 17:29:17.706069       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0910 17:29:17.706118       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0910 17:29:17.706212       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0910 17:29:17.706234       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 17:29:17.706617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0910 17:29:17.706648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 17:29:18.531017       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0910 17:29:18.531106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:29:18.559407       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 17:29:18.559523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:29:18.575911       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0910 17:29:18.575958       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:29:18.673370       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0910 17:29:18.673646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 17:29:18.723070       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0910 17:29:18.723256       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:29:18.762834       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0910 17:29:18.762914       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0910 17:29:18.788689       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0910 17:29:18.788723       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0910 17:29:21.401599       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 10 17:41:56 addons-592000 kubelet[2026]: I0910 17:41:56.552372    2026 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ec56769-bfd7-46da-b810-095f42ed9453-kube-api-access-xn6pf" (OuterVolumeSpecName: "kube-api-access-xn6pf") pod "6ec56769-bfd7-46da-b810-095f42ed9453" (UID: "6ec56769-bfd7-46da-b810-095f42ed9453"). InnerVolumeSpecName "kube-api-access-xn6pf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 10 17:41:56 addons-592000 kubelet[2026]: I0910 17:41:56.651921    2026 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xn6pf\" (UniqueName: \"kubernetes.io/projected/6ec56769-bfd7-46da-b810-095f42ed9453-kube-api-access-xn6pf\") on node \"addons-592000\" DevicePath \"\""
	Sep 10 17:41:56 addons-592000 kubelet[2026]: I0910 17:41:56.651940    2026 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6ec56769-bfd7-46da-b810-095f42ed9453-gcp-creds\") on node \"addons-592000\" DevicePath \"\""
	Sep 10 17:41:56 addons-592000 kubelet[2026]: I0910 17:41:56.752216    2026 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx92c\" (UniqueName: \"kubernetes.io/projected/d1e5edb1-7803-4933-a00b-4e3f52088cd3-kube-api-access-xx92c\") pod \"d1e5edb1-7803-4933-a00b-4e3f52088cd3\" (UID: \"d1e5edb1-7803-4933-a00b-4e3f52088cd3\") "
	Sep 10 17:41:56 addons-592000 kubelet[2026]: I0910 17:41:56.753683    2026 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1e5edb1-7803-4933-a00b-4e3f52088cd3-kube-api-access-xx92c" (OuterVolumeSpecName: "kube-api-access-xx92c") pod "d1e5edb1-7803-4933-a00b-4e3f52088cd3" (UID: "d1e5edb1-7803-4933-a00b-4e3f52088cd3"). InnerVolumeSpecName "kube-api-access-xx92c". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 10 17:41:56 addons-592000 kubelet[2026]: I0910 17:41:56.853763    2026 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l627s\" (UniqueName: \"kubernetes.io/projected/ba9d217c-a23d-45ff-985a-a5b541ecc35a-kube-api-access-l627s\") pod \"ba9d217c-a23d-45ff-985a-a5b541ecc35a\" (UID: \"ba9d217c-a23d-45ff-985a-a5b541ecc35a\") "
	Sep 10 17:41:56 addons-592000 kubelet[2026]: I0910 17:41:56.853946    2026 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xx92c\" (UniqueName: \"kubernetes.io/projected/d1e5edb1-7803-4933-a00b-4e3f52088cd3-kube-api-access-xx92c\") on node \"addons-592000\" DevicePath \"\""
	Sep 10 17:41:56 addons-592000 kubelet[2026]: I0910 17:41:56.855144    2026 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba9d217c-a23d-45ff-985a-a5b541ecc35a-kube-api-access-l627s" (OuterVolumeSpecName: "kube-api-access-l627s") pod "ba9d217c-a23d-45ff-985a-a5b541ecc35a" (UID: "ba9d217c-a23d-45ff-985a-a5b541ecc35a"). InnerVolumeSpecName "kube-api-access-l627s". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 10 17:41:56 addons-592000 kubelet[2026]: I0910 17:41:56.954261    2026 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-l627s\" (UniqueName: \"kubernetes.io/projected/ba9d217c-a23d-45ff-985a-a5b541ecc35a-kube-api-access-l627s\") on node \"addons-592000\" DevicePath \"\""
	Sep 10 17:41:57 addons-592000 kubelet[2026]: I0910 17:41:57.076400    2026 scope.go:117] "RemoveContainer" containerID="0b4fb9e93b88a8d04069f672c189dcf0fe177f92d345f57cb09a1f534076df55"
	Sep 10 17:41:57 addons-592000 kubelet[2026]: I0910 17:41:57.087524    2026 scope.go:117] "RemoveContainer" containerID="0b4fb9e93b88a8d04069f672c189dcf0fe177f92d345f57cb09a1f534076df55"
	Sep 10 17:41:57 addons-592000 kubelet[2026]: E0910 17:41:57.088606    2026 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 0b4fb9e93b88a8d04069f672c189dcf0fe177f92d345f57cb09a1f534076df55" containerID="0b4fb9e93b88a8d04069f672c189dcf0fe177f92d345f57cb09a1f534076df55"
	Sep 10 17:41:57 addons-592000 kubelet[2026]: I0910 17:41:57.088621    2026 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"0b4fb9e93b88a8d04069f672c189dcf0fe177f92d345f57cb09a1f534076df55"} err="failed to get container status \"0b4fb9e93b88a8d04069f672c189dcf0fe177f92d345f57cb09a1f534076df55\": rpc error: code = Unknown desc = Error response from daemon: No such container: 0b4fb9e93b88a8d04069f672c189dcf0fe177f92d345f57cb09a1f534076df55"
	Sep 10 17:41:57 addons-592000 kubelet[2026]: I0910 17:41:57.088634    2026 scope.go:117] "RemoveContainer" containerID="3ae2cc2fd9e7be05b8e65014593462af188e7e102273963d53354d5da59ebddf"
	Sep 10 17:41:57 addons-592000 kubelet[2026]: I0910 17:41:57.109462    2026 scope.go:117] "RemoveContainer" containerID="3ae2cc2fd9e7be05b8e65014593462af188e7e102273963d53354d5da59ebddf"
	Sep 10 17:41:57 addons-592000 kubelet[2026]: E0910 17:41:57.110330    2026 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 3ae2cc2fd9e7be05b8e65014593462af188e7e102273963d53354d5da59ebddf" containerID="3ae2cc2fd9e7be05b8e65014593462af188e7e102273963d53354d5da59ebddf"
	Sep 10 17:41:57 addons-592000 kubelet[2026]: I0910 17:41:57.110371    2026 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"3ae2cc2fd9e7be05b8e65014593462af188e7e102273963d53354d5da59ebddf"} err="failed to get container status \"3ae2cc2fd9e7be05b8e65014593462af188e7e102273963d53354d5da59ebddf\": rpc error: code = Unknown desc = Error response from daemon: No such container: 3ae2cc2fd9e7be05b8e65014593462af188e7e102273963d53354d5da59ebddf"
	Sep 10 17:41:57 addons-592000 kubelet[2026]: I0910 17:41:57.110388    2026 scope.go:117] "RemoveContainer" containerID="e7cff5d22c181ce0af24e9261766eed30400076c102a2434c7bb8fc2bc880e5f"
	Sep 10 17:41:57 addons-592000 kubelet[2026]: I0910 17:41:57.129231    2026 scope.go:117] "RemoveContainer" containerID="e7cff5d22c181ce0af24e9261766eed30400076c102a2434c7bb8fc2bc880e5f"
	Sep 10 17:41:57 addons-592000 kubelet[2026]: E0910 17:41:57.129579    2026 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: e7cff5d22c181ce0af24e9261766eed30400076c102a2434c7bb8fc2bc880e5f" containerID="e7cff5d22c181ce0af24e9261766eed30400076c102a2434c7bb8fc2bc880e5f"
	Sep 10 17:41:57 addons-592000 kubelet[2026]: I0910 17:41:57.129598    2026 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"e7cff5d22c181ce0af24e9261766eed30400076c102a2434c7bb8fc2bc880e5f"} err="failed to get container status \"e7cff5d22c181ce0af24e9261766eed30400076c102a2434c7bb8fc2bc880e5f\": rpc error: code = Unknown desc = Error response from daemon: No such container: e7cff5d22c181ce0af24e9261766eed30400076c102a2434c7bb8fc2bc880e5f"
	Sep 10 17:41:57 addons-592000 kubelet[2026]: I0910 17:41:57.155689    2026 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6ef6de4d-79f9-4779-971b-4671e55ffe5a-tmp-dir\") pod \"6ef6de4d-79f9-4779-971b-4671e55ffe5a\" (UID: \"6ef6de4d-79f9-4779-971b-4671e55ffe5a\") "
	Sep 10 17:41:57 addons-592000 kubelet[2026]: I0910 17:41:57.155709    2026 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54jqb\" (UniqueName: \"kubernetes.io/projected/6ef6de4d-79f9-4779-971b-4671e55ffe5a-kube-api-access-54jqb\") pod \"6ef6de4d-79f9-4779-971b-4671e55ffe5a\" (UID: \"6ef6de4d-79f9-4779-971b-4671e55ffe5a\") "
	Sep 10 17:41:57 addons-592000 kubelet[2026]: I0910 17:41:57.157107    2026 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ef6de4d-79f9-4779-971b-4671e55ffe5a-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6ef6de4d-79f9-4779-971b-4671e55ffe5a" (UID: "6ef6de4d-79f9-4779-971b-4671e55ffe5a"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 10 17:41:57 addons-592000 kubelet[2026]: I0910 17:41:57.157776    2026 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ef6de4d-79f9-4779-971b-4671e55ffe5a-kube-api-access-54jqb" (OuterVolumeSpecName: "kube-api-access-54jqb") pod "6ef6de4d-79f9-4779-971b-4671e55ffe5a" (UID: "6ef6de4d-79f9-4779-971b-4671e55ffe5a"). InnerVolumeSpecName "kube-api-access-54jqb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	
	
	==> storage-provisioner [cf29947c74eb] <==
	I0910 17:29:27.639423       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0910 17:29:27.741526       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0910 17:29:27.741570       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0910 17:29:27.797884       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0910 17:29:27.797983       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-592000_e7c1ccd2-eef9-4189-bf52-2a318b053220!
	I0910 17:29:27.799044       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b3474a80-8f9f-4102-af60-47b2a3c17674", APIVersion:"v1", ResourceVersion:"498", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-592000_e7c1ccd2-eef9-4189-bf52-2a318b053220 became leader
	I0910 17:29:27.898751       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-592000_e7c1ccd2-eef9-4189-bf52-2a318b053220!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-592000 -n addons-592000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-592000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-qcnqj ingress-nginx-admission-patch-kq245
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-592000 describe pod busybox ingress-nginx-admission-create-qcnqj ingress-nginx-admission-patch-kq245
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-592000 describe pod busybox ingress-nginx-admission-create-qcnqj ingress-nginx-admission-patch-kq245: exit status 1 (41.430334ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-592000/192.168.105.2
	Start Time:       Tue, 10 Sep 2024 10:32:45 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nsn45 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nsn45:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m12s                  default-scheduler  Successfully assigned default/busybox to addons-592000
	  Normal   Pulling    7m42s (x4 over 9m12s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m41s (x4 over 9m11s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m41s (x4 over 9m11s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m30s (x6 over 9m10s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m8s (x20 over 9m10s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-qcnqj" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kq245" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-592000 describe pod busybox ingress-nginx-admission-create-qcnqj ingress-nginx-admission-patch-kq245: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.35s)
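The failure mode captured above is an anonymous image pull being rejected by gcr.io ("unauthorized: authentication failed" on the manifest HEAD request), which leaves the busybox pod in ImagePullBackOff. A minimal Go sketch of that same anonymous request, useful for telling a registry-side rejection apart from a host-side network problem; the URL is copied from the kubelet event above, everything else is illustrative:

	// Hedged repro sketch: issue the same anonymous HEAD request the kubelet
	// logged as failing and print the registry's status line. A 401/403 here
	// matches the "unauthorized" event above; a transport error would point
	// at the test host's connectivity instead.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		url := "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("HEAD failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}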

TestCertOptions (12.26s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-070000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-070000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (11.990067875s)

-- stdout --
	* [cert-options-070000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-070000" primary control-plane node in "cert-options-070000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-070000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-070000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-070000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-070000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-070000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.005542ms)

-- stdout --
	* The control-plane node cert-options-070000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-070000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-070000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-070000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-070000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-070000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.815958ms)

-- stdout --
	* The control-plane node cert-options-070000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-070000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-070000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-070000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-070000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-10 11:07:03.539088 -0700 PDT m=+2323.002132501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-070000 -n cert-options-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-070000 -n cert-options-070000: exit status 7 (31.4025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-070000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-070000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-070000
--- FAIL: TestCertOptions (12.26s)
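Both start attempts in this test (and most qemu2 failures in this report) die at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so qemu never gets its network file descriptor. A short Go probe of that unix socket, assuming only the socket path shown in the error, distinguishes a stopped daemon (connection refused, or no such file) from a permissions problem (permission denied):

	// Hedged sketch: dial the unix socket socket_vmnet_client needs. An
	// ECONNREFUSED or ENOENT result means the daemon is down, matching the
	// errors above; EACCES would instead point at socket permissions.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}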

TestCertExpiration (197.7s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-717000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-717000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (12.336437791s)

-- stdout --
	* [cert-expiration-717000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-717000" primary control-plane node in "cert-expiration-717000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-717000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-717000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-717000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-717000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-717000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.217871666s)

-- stdout --
	* [cert-expiration-717000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-717000" primary control-plane node in "cert-expiration-717000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-717000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-717000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-717000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-717000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-717000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-717000" primary control-plane node in "cert-expiration-717000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-717000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-717000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-717000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-10 11:10:06.209611 -0700 PDT m=+2505.677502835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-717000 -n cert-expiration-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-717000 -n cert-expiration-717000: exit status 7 (63.000708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-717000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-717000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-717000
--- FAIL: TestCertExpiration (197.70s)
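The test's real assertion, that restarting with --cert-expiration=8760h after a 3m expiry warns about expired certificates, was never reached: no VM booted, so no certificate was ever minted. For reference, the expiry under test lives in the apiserver certificate's NotAfter field; a small Go sketch that reads it from the same in-guest path the failed ssh command above targets (the file exists only inside a running minikube VM, so this is illustrative):

	// Hedged sketch: decode the apiserver certificate and print NotAfter,
	// the field minikube's --cert-expiration flag controls.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println("parse failed:", err)
			return
		}
		fmt.Println("apiserver cert expires:", cert.NotAfter)
	}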

TestDockerFlags (12.91s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-081000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-081000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.510515667s)

-- stdout --
	* [docker-flags-081000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-081000" primary control-plane node in "docker-flags-081000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-081000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 11:06:38.517578    5109 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:06:38.517715    5109 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:06:38.517719    5109 out.go:358] Setting ErrFile to fd 2...
	I0910 11:06:38.517721    5109 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:06:38.517862    5109 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:06:38.519043    5109 out.go:352] Setting JSON to false
	I0910 11:06:38.536787    5109 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3962,"bootTime":1725987636,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:06:38.536869    5109 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:06:38.550123    5109 out.go:177] * [docker-flags-081000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:06:38.559102    5109 notify.go:220] Checking for updates...
	I0910 11:06:38.565052    5109 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:06:38.576759    5109 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:06:38.586007    5109 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:06:38.592001    5109 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:06:38.598940    5109 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:06:38.612759    5109 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:06:38.621640    5109 config.go:182] Loaded profile config "force-systemd-flag-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:06:38.621728    5109 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:06:38.621794    5109 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:06:38.625008    5109 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 11:06:38.632007    5109 start.go:297] selected driver: qemu2
	I0910 11:06:38.632014    5109 start.go:901] validating driver "qemu2" against <nil>
	I0910 11:06:38.632020    5109 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:06:38.634847    5109 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 11:06:38.647953    5109 out.go:177] * Automatically selected the socket_vmnet network
	I0910 11:06:38.653144    5109 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0910 11:06:38.653181    5109 cni.go:84] Creating CNI manager for ""
	I0910 11:06:38.653190    5109 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:06:38.653196    5109 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 11:06:38.653249    5109 start.go:340] cluster config:
	{Name:docker-flags-081000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-081000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:06:38.658091    5109 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:06:38.673093    5109 out.go:177] * Starting "docker-flags-081000" primary control-plane node in "docker-flags-081000" cluster
	I0910 11:06:38.675204    5109 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:06:38.675231    5109 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:06:38.675239    5109 cache.go:56] Caching tarball of preloaded images
	I0910 11:06:38.675332    5109 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:06:38.675339    5109 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:06:38.675417    5109 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/docker-flags-081000/config.json ...
	I0910 11:06:38.675430    5109 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/docker-flags-081000/config.json: {Name:mk33ca806f63455a657e02070b2d8649103c2c6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:06:38.676075    5109 start.go:360] acquireMachinesLock for docker-flags-081000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:06:40.843805    5109 start.go:364] duration metric: took 2.167744875s to acquireMachinesLock for "docker-flags-081000"
	I0910 11:06:40.843967    5109 start.go:93] Provisioning new machine with config: &{Name:docker-flags-081000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-081000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:06:40.844162    5109 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:06:40.855629    5109 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0910 11:06:40.905202    5109 start.go:159] libmachine.API.Create for "docker-flags-081000" (driver="qemu2")
	I0910 11:06:40.905253    5109 client.go:168] LocalClient.Create starting
	I0910 11:06:40.905410    5109 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:06:40.905465    5109 main.go:141] libmachine: Decoding PEM data...
	I0910 11:06:40.905484    5109 main.go:141] libmachine: Parsing certificate...
	I0910 11:06:40.905544    5109 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:06:40.905588    5109 main.go:141] libmachine: Decoding PEM data...
	I0910 11:06:40.905600    5109 main.go:141] libmachine: Parsing certificate...
	I0910 11:06:40.906210    5109 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:06:41.257279    5109 main.go:141] libmachine: Creating SSH key...
	I0910 11:06:41.304789    5109 main.go:141] libmachine: Creating Disk image...
	I0910 11:06:41.304794    5109 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:06:41.305023    5109 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/docker-flags-081000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/docker-flags-081000/disk.qcow2
	I0910 11:06:41.314082    5109 main.go:141] libmachine: STDOUT: 
	I0910 11:06:41.314102    5109 main.go:141] libmachine: STDERR: 
	I0910 11:06:41.314149    5109 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/docker-flags-081000/disk.qcow2 +20000M
	I0910 11:06:41.321953    5109 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:06:41.321968    5109 main.go:141] libmachine: STDERR: 
	I0910 11:06:41.321986    5109 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/docker-flags-081000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/docker-flags-081000/disk.qcow2
	I0910 11:06:41.321991    5109 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:06:41.322003    5109 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:06:41.322038    5109 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/docker-flags-081000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/docker-flags-081000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/docker-flags-081000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:29:86:f4:48:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/docker-flags-081000/disk.qcow2
	I0910 11:06:41.323602    5109 main.go:141] libmachine: STDOUT: 
	I0910 11:06:41.323617    5109 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:06:41.323637    5109 client.go:171] duration metric: took 418.389125ms to LocalClient.Create
	I0910 11:06:43.325771    5109 start.go:128] duration metric: took 2.481643833s to createHost
	I0910 11:06:43.325918    5109 start.go:83] releasing machines lock for "docker-flags-081000", held for 2.4820545s
	W0910 11:06:43.326012    5109 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:06:43.342270    5109 out.go:177] * Deleting "docker-flags-081000" in qemu2 ...
	W0910 11:06:43.386101    5109 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:06:43.386139    5109 start.go:729] Will try again in 5 seconds ...
	I0910 11:06:48.388185    5109 start.go:360] acquireMachinesLock for docker-flags-081000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:06:48.388364    5109 start.go:364] duration metric: took 142.833µs to acquireMachinesLock for "docker-flags-081000"
	I0910 11:06:48.388396    5109 start.go:93] Provisioning new machine with config: &{Name:docker-flags-081000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-081000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:06:48.388491    5109 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:06:48.395763    5109 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0910 11:06:48.422041    5109 start.go:159] libmachine.API.Create for "docker-flags-081000" (driver="qemu2")
	I0910 11:06:48.422076    5109 client.go:168] LocalClient.Create starting
	I0910 11:06:48.422144    5109 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:06:48.422176    5109 main.go:141] libmachine: Decoding PEM data...
	I0910 11:06:48.422191    5109 main.go:141] libmachine: Parsing certificate...
	I0910 11:06:48.422229    5109 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:06:48.422249    5109 main.go:141] libmachine: Decoding PEM data...
	I0910 11:06:48.422261    5109 main.go:141] libmachine: Parsing certificate...
	I0910 11:06:48.425468    5109 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:06:48.871397    5109 main.go:141] libmachine: Creating SSH key...
	I0910 11:06:48.933756    5109 main.go:141] libmachine: Creating Disk image...
	I0910 11:06:48.933762    5109 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:06:48.933938    5109 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/docker-flags-081000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/docker-flags-081000/disk.qcow2
	I0910 11:06:48.942922    5109 main.go:141] libmachine: STDOUT: 
	I0910 11:06:48.942940    5109 main.go:141] libmachine: STDERR: 
	I0910 11:06:48.942996    5109 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/docker-flags-081000/disk.qcow2 +20000M
	I0910 11:06:48.950847    5109 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:06:48.950861    5109 main.go:141] libmachine: STDERR: 
	I0910 11:06:48.950878    5109 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/docker-flags-081000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/docker-flags-081000/disk.qcow2
	I0910 11:06:48.950887    5109 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:06:48.950896    5109 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:06:48.950933    5109 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/docker-flags-081000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/docker-flags-081000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/docker-flags-081000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:db:0b:dd:3f:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/docker-flags-081000/disk.qcow2
	I0910 11:06:48.952525    5109 main.go:141] libmachine: STDOUT: 
	I0910 11:06:48.952543    5109 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:06:48.952556    5109 client.go:171] duration metric: took 530.489917ms to LocalClient.Create
	I0910 11:06:50.954730    5109 start.go:128] duration metric: took 2.56627075s to createHost
	I0910 11:06:50.954817    5109 start.go:83] releasing machines lock for "docker-flags-081000", held for 2.566505083s
	W0910 11:06:50.955271    5109 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-081000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-081000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:06:50.972805    5109 out.go:201] 
	W0910 11:06:50.977077    5109 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:06:50.977118    5109 out.go:270] * 
	* 
	W0910 11:06:50.979079    5109 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:06:50.987893    5109 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-081000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-081000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-081000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (94.3255ms)

-- stdout --
	* The control-plane node docker-flags-081000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-081000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-081000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-081000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-081000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-081000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-081000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-081000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-081000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (89.766833ms)

-- stdout --
	* The control-plane node docker-flags-081000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-081000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-081000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-081000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-081000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-081000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-09-10 11:06:51.185047 -0700 PDT m=+2310.647763376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-081000 -n docker-flags-081000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-081000 -n docker-flags-081000: exit status 7 (33.3975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-081000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-081000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-081000
--- FAIL: TestDockerFlags (12.91s)
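The assertions at docker_test.go:63 and docker_test.go:73 are plain substring checks against `systemctl show docker` output; they failed here only because the command ran against a stopped host and got minikube's "host is not running" message instead of a unit dump. A hedged Go sketch of the same check against a hypothetical healthy Environment line (the sample output is assumed, not taken from this run):

	// Hedged sketch of the docker_test.go:63 check: each --docker-env pair
	// should surface verbatim in the docker unit's Environment= line.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		output := "Environment=FOO=BAR BAZ=BAT" // hypothetical healthy output
		for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
			if strings.Contains(output, kv) {
				fmt.Printf("env %q present\n", kv)
			} else {
				fmt.Printf("env %q missing from docker unit\n", kv)
			}
		}
	}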

TestForceSystemdFlag (12.93s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-278000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-278000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.626624042s)

-- stdout --
	* [force-systemd-flag-278000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-278000" primary control-plane node in "force-systemd-flag-278000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-278000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 11:06:35.758005    5088 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:06:35.758157    5088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:06:35.758160    5088 out.go:358] Setting ErrFile to fd 2...
	I0910 11:06:35.758162    5088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:06:35.758288    5088 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:06:35.759536    5088 out.go:352] Setting JSON to false
	I0910 11:06:35.778871    5088 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3959,"bootTime":1725987636,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:06:35.778965    5088 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:06:35.842870    5088 out.go:177] * [force-systemd-flag-278000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:06:35.851779    5088 notify.go:220] Checking for updates...
	I0910 11:06:35.857827    5088 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:06:35.864567    5088 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:06:35.877685    5088 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:06:35.884790    5088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:06:35.892795    5088 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:06:35.899692    5088 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:06:35.904448    5088 config.go:182] Loaded profile config "force-systemd-env-177000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:06:35.904562    5088 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:06:35.904655    5088 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:06:35.919738    5088 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 11:06:35.929737    5088 start.go:297] selected driver: qemu2
	I0910 11:06:35.929748    5088 start.go:901] validating driver "qemu2" against <nil>
	I0910 11:06:35.929761    5088 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:06:35.933752    5088 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 11:06:35.944743    5088 out.go:177] * Automatically selected the socket_vmnet network
	I0910 11:06:35.947975    5088 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0910 11:06:35.948034    5088 cni.go:84] Creating CNI manager for ""
	I0910 11:06:35.948047    5088 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:06:35.948055    5088 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 11:06:35.948113    5088 start.go:340] cluster config:
	{Name:force-systemd-flag-278000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-278000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:06:35.954479    5088 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:06:35.969838    5088 out.go:177] * Starting "force-systemd-flag-278000" primary control-plane node in "force-systemd-flag-278000" cluster
	I0910 11:06:35.973787    5088 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:06:35.973819    5088 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:06:35.973835    5088 cache.go:56] Caching tarball of preloaded images
	I0910 11:06:35.973969    5088 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:06:35.973984    5088 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:06:35.974097    5088 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/force-systemd-flag-278000/config.json ...
	I0910 11:06:35.974118    5088 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/force-systemd-flag-278000/config.json: {Name:mk3b20719560a5e121cd4107ed0869ab1f295b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:06:35.974671    5088 start.go:360] acquireMachinesLock for force-systemd-flag-278000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:06:38.126275    5088 start.go:364] duration metric: took 2.151614791s to acquireMachinesLock for "force-systemd-flag-278000"
	I0910 11:06:38.126485    5088 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-278000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-278000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:06:38.126669    5088 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:06:38.140036    5088 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0910 11:06:38.191795    5088 start.go:159] libmachine.API.Create for "force-systemd-flag-278000" (driver="qemu2")
	I0910 11:06:38.191850    5088 client.go:168] LocalClient.Create starting
	I0910 11:06:38.191962    5088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:06:38.192021    5088 main.go:141] libmachine: Decoding PEM data...
	I0910 11:06:38.192036    5088 main.go:141] libmachine: Parsing certificate...
	I0910 11:06:38.192121    5088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:06:38.192165    5088 main.go:141] libmachine: Decoding PEM data...
	I0910 11:06:38.192180    5088 main.go:141] libmachine: Parsing certificate...
	I0910 11:06:38.192818    5088 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:06:38.730133    5088 main.go:141] libmachine: Creating SSH key...
	I0910 11:06:38.823618    5088 main.go:141] libmachine: Creating Disk image...
	I0910 11:06:38.823623    5088 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:06:38.823811    5088 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-flag-278000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-flag-278000/disk.qcow2
	I0910 11:06:38.833071    5088 main.go:141] libmachine: STDOUT: 
	I0910 11:06:38.833090    5088 main.go:141] libmachine: STDERR: 
	I0910 11:06:38.833149    5088 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-flag-278000/disk.qcow2 +20000M
	I0910 11:06:38.840922    5088 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:06:38.840935    5088 main.go:141] libmachine: STDERR: 
	I0910 11:06:38.840950    5088 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-flag-278000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-flag-278000/disk.qcow2
	I0910 11:06:38.840953    5088 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:06:38.840966    5088 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:06:38.840995    5088 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-flag-278000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-flag-278000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-flag-278000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:4f:2b:04:d7:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-flag-278000/disk.qcow2
	I0910 11:06:38.842591    5088 main.go:141] libmachine: STDOUT: 
	I0910 11:06:38.842618    5088 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:06:38.842638    5088 client.go:171] duration metric: took 650.799416ms to LocalClient.Create
	I0910 11:06:40.843568    5088 start.go:128] duration metric: took 2.716944625s to createHost
	I0910 11:06:40.843649    5088 start.go:83] releasing machines lock for "force-systemd-flag-278000", held for 2.717402625s
	W0910 11:06:40.843711    5088 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:06:40.863601    5088 out.go:177] * Deleting "force-systemd-flag-278000" in qemu2 ...
	W0910 11:06:40.888149    5088 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:06:40.888166    5088 start.go:729] Will try again in 5 seconds ...
	I0910 11:06:45.890198    5088 start.go:360] acquireMachinesLock for force-systemd-flag-278000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:06:45.890748    5088 start.go:364] duration metric: took 424.875µs to acquireMachinesLock for "force-systemd-flag-278000"
	I0910 11:06:45.890926    5088 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-278000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-278000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:06:45.891253    5088 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:06:45.913761    5088 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0910 11:06:45.965157    5088 start.go:159] libmachine.API.Create for "force-systemd-flag-278000" (driver="qemu2")
	I0910 11:06:45.965209    5088 client.go:168] LocalClient.Create starting
	I0910 11:06:45.965316    5088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:06:45.965384    5088 main.go:141] libmachine: Decoding PEM data...
	I0910 11:06:45.965403    5088 main.go:141] libmachine: Parsing certificate...
	I0910 11:06:45.965479    5088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:06:45.965523    5088 main.go:141] libmachine: Decoding PEM data...
	I0910 11:06:45.965535    5088 main.go:141] libmachine: Parsing certificate...
	I0910 11:06:45.966056    5088 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:06:46.138449    5088 main.go:141] libmachine: Creating SSH key...
	I0910 11:06:46.284796    5088 main.go:141] libmachine: Creating Disk image...
	I0910 11:06:46.284802    5088 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:06:46.285038    5088 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-flag-278000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-flag-278000/disk.qcow2
	I0910 11:06:46.294404    5088 main.go:141] libmachine: STDOUT: 
	I0910 11:06:46.294431    5088 main.go:141] libmachine: STDERR: 
	I0910 11:06:46.294477    5088 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-flag-278000/disk.qcow2 +20000M
	I0910 11:06:46.302184    5088 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:06:46.302200    5088 main.go:141] libmachine: STDERR: 
	I0910 11:06:46.302220    5088 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-flag-278000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-flag-278000/disk.qcow2
	I0910 11:06:46.302230    5088 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:06:46.302241    5088 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:06:46.302269    5088 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-flag-278000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-flag-278000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-flag-278000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:0c:2c:b3:a8:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-flag-278000/disk.qcow2
	I0910 11:06:46.303773    5088 main.go:141] libmachine: STDOUT: 
	I0910 11:06:46.303788    5088 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:06:46.303800    5088 client.go:171] duration metric: took 338.596208ms to LocalClient.Create
	I0910 11:06:48.305926    5088 start.go:128] duration metric: took 2.414708459s to createHost
	I0910 11:06:48.305999    5088 start.go:83] releasing machines lock for "force-systemd-flag-278000", held for 2.415255042s
	W0910 11:06:48.306346    5088 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-278000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:06:48.322878    5088 out.go:201] 
	W0910 11:06:48.324539    5088 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:06:48.324565    5088 out.go:270] * 
	W0910 11:06:48.327238    5088 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:06:48.339854    5088 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-278000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-278000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-278000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (81.062666ms)

-- stdout --
	* The control-plane node force-systemd-flag-278000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-278000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-278000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-10 11:06:48.437022 -0700 PDT m=+2307.899665668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-278000 -n force-systemd-flag-278000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-278000 -n force-systemd-flag-278000: exit status 7 (37.839583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-278000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-278000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-278000
--- FAIL: TestForceSystemdFlag (12.93s)
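Both provisioning attempts above die at the same step: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet before QEMU ever launches, so no VM is created and every later assertion runs against a stopped host. A host-side triage along these lines usually separates a daemon that is not running from a stale or mis-permissioned socket (a diagnostic sketch; the launchd label io.github.lima-vm.socket_vmnet is the socket_vmnet default and is an assumption about this Jenkins host):

	# Is the daemon alive, and does the socket path from the log exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If managed by launchd (label assumed), restart it:
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet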

TestForceSystemdEnv (10.39s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-177000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-177000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.069717042s)

-- stdout --
	* [force-systemd-env-177000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-177000" primary control-plane node in "force-systemd-env-177000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-177000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 11:06:28.132045    5050 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:06:28.132166    5050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:06:28.132169    5050 out.go:358] Setting ErrFile to fd 2...
	I0910 11:06:28.132171    5050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:06:28.132288    5050 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:06:28.133413    5050 out.go:352] Setting JSON to false
	I0910 11:06:28.149453    5050 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3952,"bootTime":1725987636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:06:28.149535    5050 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:06:28.156260    5050 out.go:177] * [force-systemd-env-177000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:06:28.165143    5050 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:06:28.165167    5050 notify.go:220] Checking for updates...
	I0910 11:06:28.172098    5050 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:06:28.175141    5050 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:06:28.178118    5050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:06:28.181059    5050 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:06:28.184144    5050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0910 11:06:28.187489    5050 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:06:28.187540    5050 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:06:28.192118    5050 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 11:06:28.199158    5050 start.go:297] selected driver: qemu2
	I0910 11:06:28.199163    5050 start.go:901] validating driver "qemu2" against <nil>
	I0910 11:06:28.199168    5050 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:06:28.201371    5050 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 11:06:28.204133    5050 out.go:177] * Automatically selected the socket_vmnet network
	I0910 11:06:28.207221    5050 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0910 11:06:28.207259    5050 cni.go:84] Creating CNI manager for ""
	I0910 11:06:28.207268    5050 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:06:28.207287    5050 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 11:06:28.207319    5050 start.go:340] cluster config:
	{Name:force-systemd-env-177000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-177000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:06:28.210982    5050 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:06:28.219057    5050 out.go:177] * Starting "force-systemd-env-177000" primary control-plane node in "force-systemd-env-177000" cluster
	I0910 11:06:28.223184    5050 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:06:28.223204    5050 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:06:28.223213    5050 cache.go:56] Caching tarball of preloaded images
	I0910 11:06:28.223277    5050 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:06:28.223283    5050 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:06:28.223355    5050 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/force-systemd-env-177000/config.json ...
	I0910 11:06:28.223367    5050 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/force-systemd-env-177000/config.json: {Name:mke385b317e40ba2a455d0301c9ed21b10968505 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:06:28.223596    5050 start.go:360] acquireMachinesLock for force-systemd-env-177000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:06:28.223631    5050 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "force-systemd-env-177000"
	I0910 11:06:28.223643    5050 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-177000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-177000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:06:28.223672    5050 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:06:28.232085    5050 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0910 11:06:28.249694    5050 start.go:159] libmachine.API.Create for "force-systemd-env-177000" (driver="qemu2")
	I0910 11:06:28.249728    5050 client.go:168] LocalClient.Create starting
	I0910 11:06:28.249792    5050 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:06:28.249825    5050 main.go:141] libmachine: Decoding PEM data...
	I0910 11:06:28.249835    5050 main.go:141] libmachine: Parsing certificate...
	I0910 11:06:28.249877    5050 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:06:28.249899    5050 main.go:141] libmachine: Decoding PEM data...
	I0910 11:06:28.249908    5050 main.go:141] libmachine: Parsing certificate...
	I0910 11:06:28.250251    5050 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:06:28.409803    5050 main.go:141] libmachine: Creating SSH key...
	I0910 11:06:28.612333    5050 main.go:141] libmachine: Creating Disk image...
	I0910 11:06:28.612341    5050 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:06:28.612557    5050 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-env-177000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-env-177000/disk.qcow2
	I0910 11:06:28.621885    5050 main.go:141] libmachine: STDOUT: 
	I0910 11:06:28.621903    5050 main.go:141] libmachine: STDERR: 
	I0910 11:06:28.621954    5050 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-env-177000/disk.qcow2 +20000M
	I0910 11:06:28.630044    5050 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:06:28.630060    5050 main.go:141] libmachine: STDERR: 
	I0910 11:06:28.630077    5050 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-env-177000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-env-177000/disk.qcow2
	I0910 11:06:28.630080    5050 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:06:28.630096    5050 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:06:28.630130    5050 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-env-177000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-env-177000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-env-177000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:93:06:5a:1b:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-env-177000/disk.qcow2
	I0910 11:06:28.631747    5050 main.go:141] libmachine: STDOUT: 
	I0910 11:06:28.631762    5050 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:06:28.631781    5050 client.go:171] duration metric: took 382.058ms to LocalClient.Create
	I0910 11:06:30.633913    5050 start.go:128] duration metric: took 2.410290667s to createHost
	I0910 11:06:30.633999    5050 start.go:83] releasing machines lock for "force-systemd-env-177000", held for 2.410421917s
	W0910 11:06:30.634042    5050 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:06:30.644500    5050 out.go:177] * Deleting "force-systemd-env-177000" in qemu2 ...
	W0910 11:06:30.671436    5050 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:06:30.671460    5050 start.go:729] Will try again in 5 seconds ...
	I0910 11:06:35.671481    5050 start.go:360] acquireMachinesLock for force-systemd-env-177000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:06:35.671642    5050 start.go:364] duration metric: took 115.333µs to acquireMachinesLock for "force-systemd-env-177000"
	I0910 11:06:35.671679    5050 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-177000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-177000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:06:35.671753    5050 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:06:35.684757    5050 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0910 11:06:35.706068    5050 start.go:159] libmachine.API.Create for "force-systemd-env-177000" (driver="qemu2")
	I0910 11:06:35.706106    5050 client.go:168] LocalClient.Create starting
	I0910 11:06:35.706194    5050 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:06:35.706241    5050 main.go:141] libmachine: Decoding PEM data...
	I0910 11:06:35.706250    5050 main.go:141] libmachine: Parsing certificate...
	I0910 11:06:35.706295    5050 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:06:35.706327    5050 main.go:141] libmachine: Decoding PEM data...
	I0910 11:06:35.706339    5050 main.go:141] libmachine: Parsing certificate...
	I0910 11:06:35.706664    5050 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:06:36.059151    5050 main.go:141] libmachine: Creating SSH key...
	I0910 11:06:36.105566    5050 main.go:141] libmachine: Creating Disk image...
	I0910 11:06:36.105572    5050 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:06:36.105774    5050 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-env-177000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-env-177000/disk.qcow2
	I0910 11:06:36.114891    5050 main.go:141] libmachine: STDOUT: 
	I0910 11:06:36.114912    5050 main.go:141] libmachine: STDERR: 
	I0910 11:06:36.114968    5050 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-env-177000/disk.qcow2 +20000M
	I0910 11:06:36.122821    5050 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:06:36.122837    5050 main.go:141] libmachine: STDERR: 
	I0910 11:06:36.122847    5050 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-env-177000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-env-177000/disk.qcow2
	I0910 11:06:36.122850    5050 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:06:36.122862    5050 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:06:36.122895    5050 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-env-177000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-env-177000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-env-177000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:b2:99:b4:02:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/force-systemd-env-177000/disk.qcow2
	I0910 11:06:36.124470    5050 main.go:141] libmachine: STDOUT: 
	I0910 11:06:36.124496    5050 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:06:36.124508    5050 client.go:171] duration metric: took 418.39925ms to LocalClient.Create
	I0910 11:06:38.126034    5050 start.go:128] duration metric: took 2.45432125s to createHost
	I0910 11:06:38.126102    5050 start.go:83] releasing machines lock for "force-systemd-env-177000", held for 2.454509667s
	W0910 11:06:38.126399    5050 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-177000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:06:38.144117    5050 out.go:201] 
	W0910 11:06:38.151150    5050 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:06:38.151188    5050 out.go:270] * 
	W0910 11:06:38.153713    5050 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:06:38.162074    5050 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-177000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-177000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-177000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (91.564209ms)

-- stdout --
	* The control-plane node force-systemd-env-177000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-177000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-177000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-10 11:06:38.265431 -0700 PDT m=+2297.727803960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-177000 -n force-systemd-env-177000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-177000 -n force-systemd-env-177000: exit status 7 (37.811916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-177000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-177000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-177000
--- FAIL: TestForceSystemdEnv (10.39s)
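As with TestForceSystemdFlag, the assertion this test exists for (that MINIKUBE_FORCE_SYSTEMD=true makes Docker inside the guest report the systemd cgroup driver) is never reached, because the VM fails at the socket_vmnet stage. Once the network daemon is healthy, the check can be reproduced by hand with the same commands the test runs (taken from the log above; the expected output is an assumption based on what docker_test.go:110 asserts):

	MINIKUBE_FORCE_SYSTEMD=true out/minikube-darwin-arm64 start -p force-systemd-env-177000 --memory=2048 --driver=qemu2
	out/minikube-darwin-arm64 -p force-systemd-env-177000 ssh "docker info --format {{.CgroupDriver}}"
	# expected: systemd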

TestFunctional/parallel/ServiceCmdConnect (40.57s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-475000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-475000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-qd6cf" [aca21656-a313-43c4-a6d8-17b3e35fba6a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-qd6cf" [aca21656-a313-43c4-a6d8-17b3e35fba6a] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.010672541s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:32163
functional_test.go:1661: error fetching http://192.168.105.4:32163: Get "http://192.168.105.4:32163": dial tcp 192.168.105.4:32163: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32163: Get "http://192.168.105.4:32163": dial tcp 192.168.105.4:32163: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32163: Get "http://192.168.105.4:32163": dial tcp 192.168.105.4:32163: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32163: Get "http://192.168.105.4:32163": dial tcp 192.168.105.4:32163: connect: connection refused
E0910 10:47:48.525807    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1661: error fetching http://192.168.105.4:32163: Get "http://192.168.105.4:32163": dial tcp 192.168.105.4:32163: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32163: Get "http://192.168.105.4:32163": dial tcp 192.168.105.4:32163: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32163: Get "http://192.168.105.4:32163": dial tcp 192.168.105.4:32163: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32163: Get "http://192.168.105.4:32163": dial tcp 192.168.105.4:32163: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:32163: Get "http://192.168.105.4:32163": dial tcp 192.168.105.4:32163: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-475000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-qd6cf
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-475000/192.168.105.4
Start Time:       Tue, 10 Sep 2024 10:47:31 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://2d9432059a25a7c94f20e690e3b7239c127c7d477b85d68834f45c938d00ee25
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 10 Sep 2024 10:47:52 -0700
      Finished:     Tue, 10 Sep 2024 10:47:52 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8k8mx (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-8k8mx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  39s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-qd6cf to functional-475000
  Normal   Pulling    39s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     34s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 4.802s (4.802s including waiting). Image size: 84957542 bytes.
  Normal   Created    18s (x3 over 34s)  kubelet            Created container echoserver-arm
  Normal   Started    18s (x3 over 34s)  kubelet            Started container echoserver-arm
  Normal   Pulled     18s (x2 over 33s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    3s (x4 over 32s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-qd6cf_default(aca21656-a313-43c4-a6d8-17b3e35fba6a)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-475000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
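An "exec format error" from the entrypoint means the kernel was handed a binary built for a different CPU architecture than the node; the container exits immediately and lands in CrashLoopBackOff, which in turn produces the connection-refused errors above. Two quick checks (a sketch; both are standard docker/kubectl invocations using the names from this log):

	# What architecture does the image manifest advertise?
	docker manifest inspect registry.k8s.io/echoserver-arm:1.8 | grep -i architecture
	# What architecture is the node actually running?
	kubectl --context functional-475000 get node functional-475000 -o jsonpath='{.status.nodeInfo.architecture}'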
functional_test.go:1614: (dbg) Run:  kubectl --context functional-475000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.107.228.198
IPs:                      10.107.228.198
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32163/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
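The empty Endpoints field in the service description is the direct reason the connect test cannot succeed: because the backing pod never becomes Ready, the endpoints controller publishes no addresses, so connections to NodePort 32163 have nothing to route to. Two quick checks that tie the service back to the crashing pod (a sketch, not part of the recorded run):

    kubectl --context functional-475000 get endpoints hello-node-connect
    kubectl --context functional-475000 get pods -l app=hello-node-connect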
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-475000 -n functional-475000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 logs -n 25
2024/09/10 10:48:11 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| service   | functional-475000 service                                                                                            | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:47 PDT | 10 Sep 24 10:47 PDT |
	|           | hello-node --url                                                                                                     |                   |         |         |                     |                     |
	| ssh       | functional-475000 ssh findmnt                                                                                        | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:47 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-475000                                                                                                 | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:47 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1927916988/001:/mount-9p      |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-475000 ssh findmnt                                                                                        | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:47 PDT | 10 Sep 24 10:47 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-475000 ssh -- ls                                                                                          | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:47 PDT | 10 Sep 24 10:47 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-475000 ssh cat                                                                                            | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:47 PDT | 10 Sep 24 10:47 PDT |
	|           | /mount-9p/test-1725990471988822000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-475000 ssh stat                                                                                           | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:47 PDT | 10 Sep 24 10:47 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-475000 ssh stat                                                                                           | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:47 PDT | 10 Sep 24 10:47 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-475000 ssh sudo                                                                                           | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:47 PDT | 10 Sep 24 10:47 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-475000 ssh findmnt                                                                                        | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:47 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-475000                                                                                                 | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:47 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2407826175/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-475000 ssh findmnt                                                                                        | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:47 PDT | 10 Sep 24 10:47 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-475000 ssh -- ls                                                                                          | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:47 PDT | 10 Sep 24 10:47 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-475000 ssh sudo                                                                                           | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:47 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-475000                                                                                                 | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:47 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2300655594/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-475000 ssh findmnt                                                                                        | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:47 PDT | 10 Sep 24 10:48 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-475000                                                                                                 | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:47 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2300655594/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-475000                                                                                                 | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:47 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2300655594/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-475000 ssh findmnt                                                                                        | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:48 PDT | 10 Sep 24 10:48 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-475000 ssh findmnt                                                                                        | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:48 PDT | 10 Sep 24 10:48 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-475000                                                                                                 | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:48 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-475000                                                                                                 | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:48 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-475000                                                                                                 | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:48 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-475000 --dry-run                                                                                       | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:48 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-475000 | jenkins | v1.34.0 | 10 Sep 24 10:48 PDT |                     |
	|           | -p functional-475000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 10:48:00
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 10:48:00.931412    2953 out.go:345] Setting OutFile to fd 1 ...
	I0910 10:48:00.934357    2953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 10:48:00.934361    2953 out.go:358] Setting ErrFile to fd 2...
	I0910 10:48:00.934364    2953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 10:48:00.934515    2953 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 10:48:00.938642    2953 out.go:352] Setting JSON to false
	I0910 10:48:00.955489    2953 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2844,"bootTime":1725987636,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 10:48:00.955549    2953 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 10:48:00.960381    2953 out.go:177] * [functional-475000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 10:48:00.968295    2953 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 10:48:00.968370    2953 notify.go:220] Checking for updates...
	I0910 10:48:00.975339    2953 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 10:48:00.978292    2953 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 10:48:00.981324    2953 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 10:48:00.984244    2953 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 10:48:00.987326    2953 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 10:48:00.990666    2953 config.go:182] Loaded profile config "functional-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 10:48:00.990921    2953 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 10:48:00.995324    2953 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 10:48:01.002326    2953 start.go:297] selected driver: qemu2
	I0910 10:48:01.002336    2953 start.go:901] validating driver "qemu2" against &{Name:functional-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-475000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 10:48:01.002393    2953 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 10:48:01.004653    2953 cni.go:84] Creating CNI manager for ""
	I0910 10:48:01.004670    2953 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 10:48:01.004725    2953 start.go:340] cluster config:
	{Name:functional-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-475000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 10:48:01.017349    2953 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Sep 10 17:48:01 functional-475000 dockerd[6043]: time="2024-09-10T17:48:01.970717372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 17:48:01 functional-475000 dockerd[6043]: time="2024-09-10T17:48:01.970782959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 17:48:01 functional-475000 dockerd[6043]: time="2024-09-10T17:48:01.989763203Z" level=info msg="shim disconnected" id=dd3ada909d53810d3c8eb611c7404ea4ad09be3cc337f6f29ed56f3e17cff875 namespace=moby
	Sep 10 17:48:01 functional-475000 dockerd[6043]: time="2024-09-10T17:48:01.989831457Z" level=warning msg="cleaning up after shim disconnected" id=dd3ada909d53810d3c8eb611c7404ea4ad09be3cc337f6f29ed56f3e17cff875 namespace=moby
	Sep 10 17:48:01 functional-475000 dockerd[6043]: time="2024-09-10T17:48:01.989848458Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 17:48:01 functional-475000 dockerd[6036]: time="2024-09-10T17:48:01.990042720Z" level=info msg="ignoring event" container=dd3ada909d53810d3c8eb611c7404ea4ad09be3cc337f6f29ed56f3e17cff875 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:48:02 functional-475000 dockerd[6043]: time="2024-09-10T17:48:02.011545285Z" level=warning msg="cleanup warnings time=\"2024-09-10T17:48:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 10 17:48:02 functional-475000 dockerd[6043]: time="2024-09-10T17:48:02.014232408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 17:48:02 functional-475000 dockerd[6043]: time="2024-09-10T17:48:02.014342914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 17:48:02 functional-475000 dockerd[6043]: time="2024-09-10T17:48:02.014373124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 17:48:02 functional-475000 dockerd[6043]: time="2024-09-10T17:48:02.014692561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 17:48:02 functional-475000 cri-dockerd[6296]: time="2024-09-10T17:48:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ae0b4ba61634273811b5f8a798d2b8f35a1c2b2d009ab6e1ee9668f93dd6aa43/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 10 17:48:02 functional-475000 cri-dockerd[6296]: time="2024-09-10T17:48:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/400327eaf2fa015d0d55d8c191cbd600d1d15a6c518fc61bdc1cb66304711f19/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 10 17:48:02 functional-475000 dockerd[6036]: time="2024-09-10T17:48:02.285376407Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 10 17:48:04 functional-475000 cri-dockerd[6296]: time="2024-09-10T17:48:04Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Status: Downloaded newer image for kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 10 17:48:04 functional-475000 dockerd[6043]: time="2024-09-10T17:48:04.090541639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 17:48:04 functional-475000 dockerd[6043]: time="2024-09-10T17:48:04.090596434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 17:48:04 functional-475000 dockerd[6043]: time="2024-09-10T17:48:04.090610643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 17:48:04 functional-475000 dockerd[6043]: time="2024-09-10T17:48:04.090645729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 17:48:04 functional-475000 dockerd[6036]: time="2024-09-10T17:48:04.253413755Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 10 17:48:09 functional-475000 cri-dockerd[6296]: time="2024-09-10T17:48:09Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 10 17:48:09 functional-475000 dockerd[6043]: time="2024-09-10T17:48:09.702047602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 17:48:09 functional-475000 dockerd[6043]: time="2024-09-10T17:48:09.702113022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 17:48:09 functional-475000 dockerd[6043]: time="2024-09-10T17:48:09.702121148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 17:48:09 functional-475000 dockerd[6043]: time="2024-09-10T17:48:09.702152816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	7c92cb867c6aa       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         2 seconds ago        Running             kubernetes-dashboard        0                   400327eaf2fa0       kubernetes-dashboard-695b96c756-p79k8
	fd45f8373a09e       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   7 seconds ago        Running             dashboard-metrics-scraper   0                   ae0b4ba616342       dashboard-metrics-scraper-c5db448b4-pfc5t
	dd3ada909d538       72565bf5bbedf                                                                                          10 seconds ago       Exited              echoserver-arm              2                   e55b0b8878994       hello-node-64b4f8f9ff-9prtd
	6eeb9caea8261       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    15 seconds ago       Exited              mount-munger                0                   a201eb5934ef1       busybox-mount
	2d9432059a25a       72565bf5bbedf                                                                                          19 seconds ago       Exited              echoserver-arm              2                   2453115ebe84d       hello-node-connect-65d86f57f4-qd6cf
	54e8711dac1c3       nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                          33 seconds ago       Running             myfrontend                  0                   7eeb8a1e924cd       sp-pod
	39f67e3ed2123       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                          47 seconds ago       Running             nginx                       0                   61ac0bae3b223       nginx-svc
	1fa349b85817b       2437cf7621777                                                                                          About a minute ago   Running             coredns                     2                   cdaabd630c438       coredns-6f6b679f8f-gszvc
	cfeb92f2bdf5b       ba04bb24b9575                                                                                          About a minute ago   Running             storage-provisioner         3                   37d4e5ee8e98d       storage-provisioner
	48255ff2837a5       71d55d66fd4ee                                                                                          About a minute ago   Running             kube-proxy                  2                   c095200024919       kube-proxy-bpbbn
	cd35900f4309b       27e3830e14027                                                                                          About a minute ago   Running             etcd                        2                   11d997a919128       etcd-functional-475000
	165b6af675ce2       fcb0683e6bdbd                                                                                          About a minute ago   Running             kube-controller-manager     2                   f86908d181f31       kube-controller-manager-functional-475000
	9a27a37ccce11       fbbbd428abb4d                                                                                          About a minute ago   Running             kube-scheduler              2                   0816ae3bd442c       kube-scheduler-functional-475000
	d66c79967b192       cd0f0ae0ec9e0                                                                                          About a minute ago   Running             kube-apiserver              0                   b1e176c8eb4ae       kube-apiserver-functional-475000
	ab442f1caf1ba       ba04bb24b9575                                                                                          About a minute ago   Exited              storage-provisioner         2                   6341a967bd1dc       storage-provisioner
	b31a7590b50f4       2437cf7621777                                                                                          2 minutes ago        Exited              coredns                     1                   6dd8af27cfa26       coredns-6f6b679f8f-gszvc
	abe6c45f4ef5c       71d55d66fd4ee                                                                                          2 minutes ago        Exited              kube-proxy                  1                   88950c76aba24       kube-proxy-bpbbn
	270d0659a50c3       27e3830e14027                                                                                          2 minutes ago        Exited              etcd                        1                   887a144afe33b       etcd-functional-475000
	4b45322f5d0cb       fcb0683e6bdbd                                                                                          2 minutes ago        Exited              kube-controller-manager     1                   45e6ddfec7654       kube-controller-manager-functional-475000
	fbbdaea2cab56       fbbbd428abb4d                                                                                          2 minutes ago        Exited              kube-scheduler              1                   fa1ea3e7dd052       kube-scheduler-functional-475000
	
	
	==> coredns [1fa349b85817] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55599 - 7829 "HINFO IN 3453894030970443615.2183872578334871128. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.00504058s
	[INFO] 10.244.0.1:2911 - 41520 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000087089s
	[INFO] 10.244.0.1:12627 - 31916 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000101757s
	[INFO] 10.244.0.1:33901 - 51855 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000031044s
	[INFO] 10.244.0.1:8019 - 7412 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001125991s
	[INFO] 10.244.0.1:21999 - 32767 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000060129s
	[INFO] 10.244.0.1:24638 - 9428 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000142218s
	
	
	==> coredns [b31a7590b50f] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56391 - 40389 "HINFO IN 9072361897402976075.7858225347418940381. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010251691s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-475000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-475000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=functional-475000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T10_45_10_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 17:45:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-475000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 17:48:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 17:47:58 +0000   Tue, 10 Sep 2024 17:45:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 17:47:58 +0000   Tue, 10 Sep 2024 17:45:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 17:47:58 +0000   Tue, 10 Sep 2024 17:45:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 17:47:58 +0000   Tue, 10 Sep 2024 17:45:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-475000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 c28bde55a0d741a9b9b21957ca36d85b
	  System UUID:                c28bde55a0d741a9b9b21957ca36d85b
	  Boot ID:                    bf15a2fc-356e-4e52-b118-dbd9d5393303
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-9prtd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     hello-node-connect-65d86f57f4-qd6cf          0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 coredns-6f6b679f8f-gszvc                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m55s
	  kube-system                 etcd-functional-475000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m1s
	  kube-system                 kube-apiserver-functional-475000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-functional-475000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m1s
	  kube-system                 kube-proxy-bpbbn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m56s
	  kube-system                 kube-scheduler-functional-475000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m1s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-pfc5t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-p79k8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m54s                kube-proxy       
	  Normal  Starting                 72s                  kube-proxy       
	  Normal  Starting                 2m2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  3m5s (x8 over 3m5s)  kubelet          Node functional-475000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m5s (x8 over 3m5s)  kubelet          Node functional-475000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m5s (x7 over 3m5s)  kubelet          Node functional-475000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m1s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m1s                 kubelet          Node functional-475000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s                 kubelet          Node functional-475000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s                 kubelet          Node functional-475000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m57s                kubelet          Node functional-475000 status is now: NodeReady
	  Normal  RegisteredNode           2m56s                node-controller  Node functional-475000 event: Registered Node functional-475000 in Controller
	  Normal  Starting                 2m6s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node functional-475000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node functional-475000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s (x7 over 2m6s)  kubelet          Node functional-475000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m                   node-controller  Node functional-475000 event: Registered Node functional-475000 in Controller
	  Normal  Starting                 78s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  78s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    77s (x8 over 78s)    kubelet          Node functional-475000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s (x7 over 78s)    kubelet          Node functional-475000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  77s (x8 over 78s)    kubelet          Node functional-475000 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           71s                  node-controller  Node functional-475000 event: Registered Node functional-475000 in Controller
	
	
	==> dmesg <==
	[ +14.750520] kauditd_printk_skb: 36 callbacks suppressed
	[  +2.313357] systemd-fstab-generator[5127]: Ignoring "noauto" option for root device
	[ +13.725755] systemd-fstab-generator[5570]: Ignoring "noauto" option for root device
	[  +0.054795] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.098334] systemd-fstab-generator[5603]: Ignoring "noauto" option for root device
	[  +0.091859] systemd-fstab-generator[5615]: Ignoring "noauto" option for root device
	[  +0.091684] systemd-fstab-generator[5629]: Ignoring "noauto" option for root device
	[  +5.126089] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.375065] systemd-fstab-generator[6245]: Ignoring "noauto" option for root device
	[  +0.082407] systemd-fstab-generator[6257]: Ignoring "noauto" option for root device
	[  +0.075911] systemd-fstab-generator[6269]: Ignoring "noauto" option for root device
	[  +0.114581] systemd-fstab-generator[6284]: Ignoring "noauto" option for root device
	[  +0.211832] systemd-fstab-generator[6452]: Ignoring "noauto" option for root device
	[  +0.914100] systemd-fstab-generator[6573]: Ignoring "noauto" option for root device
	[  +4.432099] kauditd_printk_skb: 199 callbacks suppressed
	[Sep10 17:47] kauditd_printk_skb: 33 callbacks suppressed
	[  +7.607543] systemd-fstab-generator[7591]: Ignoring "noauto" option for root device
	[  +5.344770] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.422664] kauditd_printk_skb: 29 callbacks suppressed
	[  +9.165311] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.571833] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.001405] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.134069] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.018374] kauditd_printk_skb: 17 callbacks suppressed
	[Sep10 17:48] kauditd_printk_skb: 30 callbacks suppressed
	
	
	==> etcd [270d0659a50c] <==
	{"level":"info","ts":"2024-09-10T17:46:07.704929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-10T17:46:07.705002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-09-10T17:46:07.705045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-09-10T17:46:07.705062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-10T17:46:07.705087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-09-10T17:46:07.705110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-10T17:46:07.710278Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T17:46:07.710272Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-475000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-10T17:46:07.710872Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T17:46:07.711189Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-10T17:46:07.711213Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-10T17:46:07.712269Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T17:46:07.712345Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T17:46:07.714577Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-10T17:46:07.714578Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-10T17:46:39.977671Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-10T17:46:39.977699Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-475000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-09-10T17:46:39.977737Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-10T17:46:39.977748Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-10T17:46:39.977767Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-10T17:46:39.977799Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-10T17:46:39.992823Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-09-10T17:46:39.994431Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-10T17:46:39.994468Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-10T17:46:39.994472Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-475000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [cd35900f4309] <==
	{"level":"info","ts":"2024-09-10T17:46:55.005325Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-09-10T17:46:55.005378Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T17:46:55.005407Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T17:46:55.006391Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T17:46:55.006989Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-10T17:46:55.007077Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-10T17:46:55.007099Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-10T17:46:55.007624Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-10T17:46:55.007655Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-10T17:46:56.704835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-10T17:46:56.705102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-10T17:46:56.705167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-10T17:46:56.705203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-09-10T17:46:56.705221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-10T17:46:56.705249Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-09-10T17:46:56.705366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-10T17:46:56.709890Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-475000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-10T17:46:56.710186Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T17:46:56.710228Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T17:46:56.710956Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-10T17:46:56.711507Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-10T17:46:56.713284Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T17:46:56.713779Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T17:46:56.715256Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-10T17:46:56.732787Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> kernel <==
	 17:48:11 up 3 min,  0 users,  load average: 0.88, 0.53, 0.22
	Linux functional-475000 5.10.207 #1 SMP PREEMPT Mon Sep 9 22:12:33 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d66c79967b19] <==
	I0910 17:46:57.312920       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0910 17:46:57.312934       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0910 17:46:57.316303       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0910 17:46:57.321894       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0910 17:46:57.321952       1 aggregator.go:171] initial CRD sync complete...
	I0910 17:46:57.321977       1 autoregister_controller.go:144] Starting autoregister controller
	I0910 17:46:57.321985       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0910 17:46:57.321987       1 cache.go:39] Caches are synced for autoregister controller
	I0910 17:46:57.345075       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0910 17:46:58.216012       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0910 17:46:58.535293       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0910 17:46:58.539215       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0910 17:46:58.554254       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0910 17:46:58.561333       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0910 17:46:58.563409       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0910 17:47:00.582108       1 controller.go:615] quota admission added evaluator for: endpoints
	I0910 17:47:00.836811       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0910 17:47:16.316416       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.160.169"}
	I0910 17:47:21.084259       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.107.77.60"}
	I0910 17:47:31.510147       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0910 17:47:31.554099       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.228.198"}
	I0910 17:47:44.833849       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.96.142.82"}
	I0910 17:48:01.573497       1 controller.go:615] quota admission added evaluator for: namespaces
	I0910 17:48:01.651554       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.149.181"}
	I0910 17:48:01.665768       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.174.186"}
	
	
	==> kube-controller-manager [165b6af675ce] <==
	I0910 17:48:01.606350       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="15.301644ms"
	E0910 17:48:01.606572       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0910 17:48:01.610321       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="2.244179ms"
	E0910 17:48:01.610338       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0910 17:48:01.612489       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="4.487983ms"
	E0910 17:48:01.612507       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0910 17:48:01.617396       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="3.395124ms"
	E0910 17:48:01.617437       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0910 17:48:01.617462       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="4.762208ms"
	E0910 17:48:01.617468       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0910 17:48:01.631503       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.834334ms"
	I0910 17:48:01.641382       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="9.828559ms"
	I0910 17:48:01.641428       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="27.085µs"
	I0910 17:48:01.645680       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="6.517191ms"
	I0910 17:48:01.653242       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="7.535127ms"
	I0910 17:48:01.653316       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="32.002µs"
	I0910 17:48:01.657313       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="21.959µs"
	I0910 17:48:01.918021       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="36.461µs"
	I0910 17:48:02.023273       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="26.835µs"
	I0910 17:48:03.109250       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="37.21µs"
	I0910 17:48:04.139950       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="3.501917ms"
	I0910 17:48:04.141009       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="14.668µs"
	I0910 17:48:07.914323       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="24.252µs"
	I0910 17:48:10.218711       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="9.854714ms"
	I0910 17:48:10.218774       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="37.96µs"
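The repeated "serviceaccount \"kubernetes-dashboard\" not found" errors above are a transient startup race, not a controller fault: the dashboard addon applies the namespace, ServiceAccount, and Deployments in one batch, so the ReplicaSet controller can attempt pod creation before the ServiceAccount exists and retries until it does. The subsequent "Finished syncing" entries with no paired error show the retries succeeded. One way to confirm after the fact, assuming the functional-475000 context is still live:

	kubectl --context functional-475000 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard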
	
	
	==> kube-controller-manager [4b45322f5d0c] <==
	I0910 17:46:11.597582       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0910 17:46:11.597630       1 shared_informer.go:320] Caches are synced for service account
	I0910 17:46:11.597680       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0910 17:46:11.597710       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0910 17:46:11.597740       1 shared_informer.go:320] Caches are synced for crt configmap
	I0910 17:46:11.597754       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="23.835µs"
	I0910 17:46:11.597710       1 shared_informer.go:320] Caches are synced for PVC protection
	I0910 17:46:11.597846       1 shared_informer.go:320] Caches are synced for endpoint
	I0910 17:46:11.597577       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0910 17:46:11.599073       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0910 17:46:11.599106       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0910 17:46:11.599095       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0910 17:46:11.599150       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0910 17:46:11.747796       1 shared_informer.go:320] Caches are synced for attach detach
	I0910 17:46:11.773450       1 shared_informer.go:320] Caches are synced for resource quota
	I0910 17:46:11.782144       1 shared_informer.go:320] Caches are synced for taint
	I0910 17:46:11.782209       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0910 17:46:11.782249       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-475000"
	I0910 17:46:11.782291       1 shared_informer.go:320] Caches are synced for daemon sets
	I0910 17:46:11.782297       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0910 17:46:11.800106       1 shared_informer.go:320] Caches are synced for resource quota
	I0910 17:46:12.209111       1 shared_informer.go:320] Caches are synced for garbage collector
	I0910 17:46:12.297525       1 shared_informer.go:320] Caches are synced for garbage collector
	I0910 17:46:12.297609       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0910 17:46:39.047336       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-475000"
	
	
	==> kube-proxy [48255ff2837a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 17:46:58.467701       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 17:46:58.471039       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0910 17:46:58.471073       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 17:46:58.489295       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 17:46:58.489318       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 17:46:58.489333       1 server_linux.go:169] "Using iptables Proxier"
	I0910 17:46:58.490779       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 17:46:58.490883       1 server.go:483] "Version info" version="v1.31.0"
	I0910 17:46:58.490889       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 17:46:58.491436       1 config.go:197] "Starting service config controller"
	I0910 17:46:58.491442       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 17:46:58.491501       1 config.go:326] "Starting node config controller"
	I0910 17:46:58.491505       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 17:46:58.495106       1 config.go:104] "Starting endpoint slice config controller"
	I0910 17:46:58.495698       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 17:46:58.495704       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0910 17:46:58.592295       1 shared_informer.go:320] Caches are synced for node config
	I0910 17:46:58.596392       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [abe6c45f4ef5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 17:46:08.946227       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 17:46:08.964535       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0910 17:46:08.964570       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 17:46:08.987435       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 17:46:08.987456       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 17:46:08.987471       1 server_linux.go:169] "Using iptables Proxier"
	I0910 17:46:08.989064       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 17:46:08.989176       1 server.go:483] "Version info" version="v1.31.0"
	I0910 17:46:08.989181       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 17:46:08.989788       1 config.go:197] "Starting service config controller"
	I0910 17:46:08.989794       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 17:46:08.989801       1 config.go:104] "Starting endpoint slice config controller"
	I0910 17:46:08.989803       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 17:46:08.989919       1 config.go:326] "Starting node config controller"
	I0910 17:46:08.989921       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 17:46:09.090079       1 shared_informer.go:320] Caches are synced for node config
	I0910 17:46:09.090079       1 shared_informer.go:320] Caches are synced for service config
	I0910 17:46:09.090097       1 shared_informer.go:320] Caches are synced for endpoint slice config
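The nftables errors at the head of both kube-proxy logs are a cleanup probe rather than the failure under test: kube-proxy pipes "add table ip kube-proxy" (and the ip6 variant) into the nft CLI, the Buildroot guest kernel rejects it with "Operation not supported", and kube-proxy falls back to the iptables proxier, as the "Using iptables Proxier" line a few entries later confirms. A sketch to reproduce the probe by hand, assuming the same guest image (the errors above imply the nft binary is present and only kernel support is missing):

	out/minikube-darwin-arm64 -p functional-475000 ssh "echo 'add table ip kube-proxy' | sudo nft -f -"
	# expected on this guest: Error: Could not process rule: Operation not supported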
	
	
	==> kube-scheduler [9a27a37ccce1] <==
	I0910 17:46:55.525726       1 serving.go:386] Generated self-signed cert in-memory
	W0910 17:46:57.225514       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0910 17:46:57.225529       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0910 17:46:57.225534       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0910 17:46:57.225537       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0910 17:46:57.253219       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0910 17:46:57.258098       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 17:46:57.259192       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0910 17:46:57.262136       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0910 17:46:57.262383       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0910 17:46:57.263033       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0910 17:46:57.365763       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [fbbdaea2cab5] <==
	I0910 17:46:06.885471       1 serving.go:386] Generated self-signed cert in-memory
	W0910 17:46:08.246337       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0910 17:46:08.246354       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0910 17:46:08.246360       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0910 17:46:08.246363       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0910 17:46:08.269468       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0910 17:46:08.269565       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 17:46:08.271052       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0910 17:46:08.271070       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0910 17:46:08.271207       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0910 17:46:08.271255       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0910 17:46:08.372023       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0910 17:46:39.963875       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0910 17:46:39.963904       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0910 17:46:39.963965       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0910 17:46:39.964050       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 10 17:47:58 functional-475000 kubelet[6580]: I0910 17:47:58.197621    6580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jtgg\" (UniqueName: \"kubernetes.io/projected/32c7d11c-4deb-40e9-a7b8-e5b1abcd5cde-kube-api-access-7jtgg\") pod \"32c7d11c-4deb-40e9-a7b8-e5b1abcd5cde\" (UID: \"32c7d11c-4deb-40e9-a7b8-e5b1abcd5cde\") "
	Sep 10 17:47:58 functional-475000 kubelet[6580]: I0910 17:47:58.197648    6580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/32c7d11c-4deb-40e9-a7b8-e5b1abcd5cde-test-volume\") pod \"32c7d11c-4deb-40e9-a7b8-e5b1abcd5cde\" (UID: \"32c7d11c-4deb-40e9-a7b8-e5b1abcd5cde\") "
	Sep 10 17:47:58 functional-475000 kubelet[6580]: I0910 17:47:58.197706    6580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32c7d11c-4deb-40e9-a7b8-e5b1abcd5cde-test-volume" (OuterVolumeSpecName: "test-volume") pod "32c7d11c-4deb-40e9-a7b8-e5b1abcd5cde" (UID: "32c7d11c-4deb-40e9-a7b8-e5b1abcd5cde"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 10 17:47:58 functional-475000 kubelet[6580]: I0910 17:47:58.200581    6580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32c7d11c-4deb-40e9-a7b8-e5b1abcd5cde-kube-api-access-7jtgg" (OuterVolumeSpecName: "kube-api-access-7jtgg") pod "32c7d11c-4deb-40e9-a7b8-e5b1abcd5cde" (UID: "32c7d11c-4deb-40e9-a7b8-e5b1abcd5cde"). InnerVolumeSpecName "kube-api-access-7jtgg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 10 17:47:58 functional-475000 kubelet[6580]: I0910 17:47:58.297775    6580 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7jtgg\" (UniqueName: \"kubernetes.io/projected/32c7d11c-4deb-40e9-a7b8-e5b1abcd5cde-kube-api-access-7jtgg\") on node \"functional-475000\" DevicePath \"\""
	Sep 10 17:47:58 functional-475000 kubelet[6580]: I0910 17:47:58.297793    6580 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/32c7d11c-4deb-40e9-a7b8-e5b1abcd5cde-test-volume\") on node \"functional-475000\" DevicePath \"\""
	Sep 10 17:47:58 functional-475000 kubelet[6580]: I0910 17:47:58.958914    6580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a201eb5934ef1da544944ac79e31bc1fe12206678523162a5a9d8899a37586ea"
	Sep 10 17:48:01 functional-475000 kubelet[6580]: E0910 17:48:01.631129    6580 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="32c7d11c-4deb-40e9-a7b8-e5b1abcd5cde" containerName="mount-munger"
	Sep 10 17:48:01 functional-475000 kubelet[6580]: I0910 17:48:01.631162    6580 memory_manager.go:354] "RemoveStaleState removing state" podUID="32c7d11c-4deb-40e9-a7b8-e5b1abcd5cde" containerName="mount-munger"
	Sep 10 17:48:01 functional-475000 kubelet[6580]: I0910 17:48:01.725460    6580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dvxc\" (UniqueName: \"kubernetes.io/projected/f68c6d2f-8c38-4f7a-b87c-7cc9f164a3b0-kube-api-access-9dvxc\") pod \"dashboard-metrics-scraper-c5db448b4-pfc5t\" (UID: \"f68c6d2f-8c38-4f7a-b87c-7cc9f164a3b0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-pfc5t"
	Sep 10 17:48:01 functional-475000 kubelet[6580]: I0910 17:48:01.725480    6580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f68c6d2f-8c38-4f7a-b87c-7cc9f164a3b0-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-pfc5t\" (UID: \"f68c6d2f-8c38-4f7a-b87c-7cc9f164a3b0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-pfc5t"
	Sep 10 17:48:01 functional-475000 kubelet[6580]: I0910 17:48:01.725490    6580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz67h\" (UniqueName: \"kubernetes.io/projected/82538be3-e591-4ca1-814c-a62044ae7994-kube-api-access-cz67h\") pod \"kubernetes-dashboard-695b96c756-p79k8\" (UID: \"82538be3-e591-4ca1-814c-a62044ae7994\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-p79k8"
	Sep 10 17:48:01 functional-475000 kubelet[6580]: I0910 17:48:01.725500    6580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/82538be3-e591-4ca1-814c-a62044ae7994-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-p79k8\" (UID: \"82538be3-e591-4ca1-814c-a62044ae7994\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-p79k8"
	Sep 10 17:48:01 functional-475000 kubelet[6580]: I0910 17:48:01.909202    6580 scope.go:117] "RemoveContainer" containerID="d561bf517d0b579decdf3e4e8c7992c420dd48eed3b8fea97937bc7d72c660bf"
	Sep 10 17:48:02 functional-475000 kubelet[6580]: I0910 17:48:02.019172    6580 scope.go:117] "RemoveContainer" containerID="dd3ada909d53810d3c8eb611c7404ea4ad09be3cc337f6f29ed56f3e17cff875"
	Sep 10 17:48:02 functional-475000 kubelet[6580]: E0910 17:48:02.019254    6580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-9prtd_default(aba5f1eb-b400-4ffb-ba7c-42e959e1e05b)\"" pod="default/hello-node-64b4f8f9ff-9prtd" podUID="aba5f1eb-b400-4ffb-ba7c-42e959e1e05b"
	Sep 10 17:48:02 functional-475000 kubelet[6580]: I0910 17:48:02.041587    6580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae0b4ba61634273811b5f8a798d2b8f35a1c2b2d009ab6e1ee9668f93dd6aa43"
	Sep 10 17:48:02 functional-475000 kubelet[6580]: I0910 17:48:02.076365    6580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="400327eaf2fa015d0d55d8c191cbd600d1d15a6c518fc61bdc1cb66304711f19"
	Sep 10 17:48:03 functional-475000 kubelet[6580]: I0910 17:48:03.100678    6580 scope.go:117] "RemoveContainer" containerID="d561bf517d0b579decdf3e4e8c7992c420dd48eed3b8fea97937bc7d72c660bf"
	Sep 10 17:48:03 functional-475000 kubelet[6580]: I0910 17:48:03.101133    6580 scope.go:117] "RemoveContainer" containerID="dd3ada909d53810d3c8eb611c7404ea4ad09be3cc337f6f29ed56f3e17cff875"
	Sep 10 17:48:03 functional-475000 kubelet[6580]: E0910 17:48:03.101261    6580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-9prtd_default(aba5f1eb-b400-4ffb-ba7c-42e959e1e05b)\"" pod="default/hello-node-64b4f8f9ff-9prtd" podUID="aba5f1eb-b400-4ffb-ba7c-42e959e1e05b"
	Sep 10 17:48:04 functional-475000 kubelet[6580]: I0910 17:48:04.138526    6580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-pfc5t" podStartSLOduration=1.171550159 podStartE2EDuration="3.138516163s" podCreationTimestamp="2024-09-10 17:48:01 +0000 UTC" firstStartedPulling="2024-09-10 17:48:02.070557853 +0000 UTC m=+68.228727661" lastFinishedPulling="2024-09-10 17:48:04.037523857 +0000 UTC m=+70.195693665" observedRunningTime="2024-09-10 17:48:04.13846766 +0000 UTC m=+70.296637510" watchObservedRunningTime="2024-09-10 17:48:04.138516163 +0000 UTC m=+70.296685971"
	Sep 10 17:48:07 functional-475000 kubelet[6580]: I0910 17:48:07.908087    6580 scope.go:117] "RemoveContainer" containerID="2d9432059a25a7c94f20e690e3b7239c127c7d477b85d68834f45c938d00ee25"
	Sep 10 17:48:07 functional-475000 kubelet[6580]: E0910 17:48:07.908164    6580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-qd6cf_default(aca21656-a313-43c4-a6d8-17b3e35fba6a)\"" pod="default/hello-node-connect-65d86f57f4-qd6cf" podUID="aca21656-a313-43c4-a6d8-17b3e35fba6a"
	Sep 10 17:48:10 functional-475000 kubelet[6580]: I0910 17:48:10.210348    6580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-p79k8" podStartSLOduration=1.882795317 podStartE2EDuration="9.210327655s" podCreationTimestamp="2024-09-10 17:48:01 +0000 UTC" firstStartedPulling="2024-09-10 17:48:02.096384266 +0000 UTC m=+68.254554074" lastFinishedPulling="2024-09-10 17:48:09.423916604 +0000 UTC m=+75.582086412" observedRunningTime="2024-09-10 17:48:10.209741036 +0000 UTC m=+76.367910927" watchObservedRunningTime="2024-09-10 17:48:10.210327655 +0000 UTC m=+76.368497546"
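The kubelet entries above point at the actual failure mode for ServiceCmdConnect: the echoserver-arm container in hello-node-connect-65d86f57f4-qd6cf is stuck in CrashLoopBackOff, so the hello-node-connect service never gets a ready endpoint. A follow-up one might run, assuming the deployment carries kubectl's default app label:

	kubectl --context functional-475000 get pods -l app=hello-node-connect
	kubectl --context functional-475000 logs -l app=hello-node-connect --previous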
	
	
	==> kubernetes-dashboard [7c92cb867c6a] <==
	2024/09/10 17:48:09 Using namespace: kubernetes-dashboard
	2024/09/10 17:48:09 Using in-cluster config to connect to apiserver
	2024/09/10 17:48:09 Using secret token for csrf signing
	2024/09/10 17:48:09 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/10 17:48:09 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/10 17:48:09 Successful initial request to the apiserver, version: v1.31.0
	2024/09/10 17:48:09 Generating JWE encryption key
	2024/09/10 17:48:09 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/10 17:48:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/10 17:48:09 Initializing JWE encryption key from synchronized object
	2024/09/10 17:48:09 Creating in-cluster Sidecar client
	2024/09/10 17:48:09 Serving insecurely on HTTP port: 9090
	2024/09/10 17:48:09 Successful request to sidecar
	2024/09/10 17:48:09 Starting overwatch
	
	
	==> storage-provisioner [ab442f1caf1b] <==
	I0910 17:46:23.530425       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0910 17:46:23.534666       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0910 17:46:23.534740       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [cfeb92f2bdf5] <==
	I0910 17:46:58.443817       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0910 17:46:58.457394       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0910 17:46:58.457411       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0910 17:47:15.866372       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0910 17:47:15.866645       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"00a3a283-a7f0-46b9-a3a5-c59fdb8c92c0", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-475000_02878e91-927a-427f-a089-ce3c6a4b9195 became leader
	I0910 17:47:15.866697       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-475000_02878e91-927a-427f-a089-ce3c6a4b9195!
	I0910 17:47:15.970422       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-475000_02878e91-927a-427f-a089-ce3c6a4b9195!
	I0910 17:47:25.962070       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0910 17:47:25.962133       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    24be34a4-1723-45ee-9bcd-c93c0a2edfb8 371 0 2024-09-10 17:45:16 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-10 17:45:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-359016e0-fab1-44a2-b83d-93175ed1301e &PersistentVolumeClaim{ObjectMeta:{myclaim  default  359016e0-fab1-44a2-b83d-93175ed1301e 724 0 2024-09-10 17:47:25 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-10 17:47:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-10 17:47:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0910 17:47:25.962851       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"359016e0-fab1-44a2-b83d-93175ed1301e", APIVersion:"v1", ResourceVersion:"724", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0910 17:47:25.966236       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-359016e0-fab1-44a2-b83d-93175ed1301e" provisioned
	I0910 17:47:25.966327       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0910 17:47:25.966361       1 volume_store.go:212] Trying to save persistentvolume "pvc-359016e0-fab1-44a2-b83d-93175ed1301e"
	I0910 17:47:25.971388       1 volume_store.go:219] persistentvolume "pvc-359016e0-fab1-44a2-b83d-93175ed1301e" saved
	I0910 17:47:25.972148       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"359016e0-fab1-44a2-b83d-93175ed1301e", APIVersion:"v1", ResourceVersion:"724", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-359016e0-fab1-44a2-b83d-93175ed1301e
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-475000 -n functional-475000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-475000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-475000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-475000 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-475000/192.168.105.4
	Start Time:       Tue, 10 Sep 2024 10:47:53 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://6eeb9caea8261b6b25d510802e0b9ed0667e5419c61ef747dd38fb757e4347f7
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 10 Sep 2024 10:47:56 -0700
	      Finished:     Tue, 10 Sep 2024 10:47:56 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7jtgg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-7jtgg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  18s   default-scheduler  Successfully assigned default/busybox-mount to functional-475000
	  Normal  Pulling    18s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     15s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.855s (2.855s including waiting). Image size: 3547125 bytes.
	  Normal  Created    15s   kubelet            Created container mount-munger
	  Normal  Started    15s   kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
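Note that busybox-mount is not itself a failing pod: its mount-munger container terminated with exit code 0 and the pod phase is Succeeded. The harness's field selector status.phase!=Running deliberately lists every non-Running pod, which is why a successfully completed pod appears under "non-running pods". A narrower selector that would surface only genuinely failed pods, for comparison:

	kubectl --context functional-475000 get po -A --field-selector=status.phase=Failed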
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (40.57s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (115.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 node stop m02 -v=7 --alsologtostderr
E0910 10:53:01.668373    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/functional-475000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-080000 node stop m02 -v=7 --alsologtostderr: (12.192472208s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 status -v=7 --alsologtostderr
E0910 10:53:42.630135    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/functional-475000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-080000 status -v=7 --alsologtostderr: exit status 7 (1m17.812359583s)

                                                
                                                
-- stdout --
	ha-080000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-080000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-080000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-080000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 10:53:07.320105    3542 out.go:345] Setting OutFile to fd 1 ...
	I0910 10:53:07.320262    3542 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 10:53:07.320265    3542 out.go:358] Setting ErrFile to fd 2...
	I0910 10:53:07.320267    3542 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 10:53:07.320409    3542 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 10:53:07.320538    3542 out.go:352] Setting JSON to false
	I0910 10:53:07.320553    3542 mustload.go:65] Loading cluster: ha-080000
	I0910 10:53:07.320628    3542 notify.go:220] Checking for updates...
	I0910 10:53:07.320781    3542 config.go:182] Loaded profile config "ha-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 10:53:07.320788    3542 status.go:255] checking status of ha-080000 ...
	I0910 10:53:07.321572    3542 status.go:330] ha-080000 host status = "Running" (err=<nil>)
	I0910 10:53:07.321582    3542 host.go:66] Checking if "ha-080000" exists ...
	I0910 10:53:07.321678    3542 host.go:66] Checking if "ha-080000" exists ...
	I0910 10:53:07.321791    3542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 10:53:07.321799    3542 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000/id_rsa Username:docker}
	W0910 10:53:33.247222    3542 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0910 10:53:33.247380    3542 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0910 10:53:33.247413    3542 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0910 10:53:33.247441    3542 status.go:257] ha-080000 status: &{Name:ha-080000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0910 10:53:33.247505    3542 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0910 10:53:33.247520    3542 status.go:255] checking status of ha-080000-m02 ...
	I0910 10:53:33.247936    3542 status.go:330] ha-080000-m02 host status = "Stopped" (err=<nil>)
	I0910 10:53:33.247946    3542 status.go:343] host is not running, skipping remaining checks
	I0910 10:53:33.247950    3542 status.go:257] ha-080000-m02 status: &{Name:ha-080000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 10:53:33.247960    3542 status.go:255] checking status of ha-080000-m03 ...
	I0910 10:53:33.249744    3542 status.go:330] ha-080000-m03 host status = "Running" (err=<nil>)
	I0910 10:53:33.249754    3542 host.go:66] Checking if "ha-080000-m03" exists ...
	I0910 10:53:33.249918    3542 host.go:66] Checking if "ha-080000-m03" exists ...
	I0910 10:53:33.250049    3542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 10:53:33.250059    3542 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000-m03/id_rsa Username:docker}
	W0910 10:53:59.171681    3542 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0910 10:53:59.171733    3542 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0910 10:53:59.171743    3542 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0910 10:53:59.171747    3542 status.go:257] ha-080000-m03 status: &{Name:ha-080000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0910 10:53:59.171755    3542 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0910 10:53:59.171759    3542 status.go:255] checking status of ha-080000-m04 ...
	I0910 10:53:59.172418    3542 status.go:330] ha-080000-m04 host status = "Running" (err=<nil>)
	I0910 10:53:59.172425    3542 host.go:66] Checking if "ha-080000-m04" exists ...
	I0910 10:53:59.172533    3542 host.go:66] Checking if "ha-080000-m04" exists ...
	I0910 10:53:59.172657    3542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 10:53:59.172663    3542 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000-m04/id_rsa Username:docker}
	W0910 10:54:25.095845    3542 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0910 10:54:25.095886    3542 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0910 10:54:25.095905    3542 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0910 10:54:25.095909    3542 status.go:257] ha-080000-m04 status: &{Name:ha-080000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0910 10:54:25.095917    3542 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
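Only m02 was stopped deliberately, yet status also reports "host: Error" for ha-080000, m03, and m04. Each of those checks is an SSH dial that times out after roughly 26 seconds (10:53:07 -> 10:53:33 -> 10:53:59 -> 10:54:25 in the stderr above), which is what stretches this single status call to 1m17s. To probe a node's SSH port directly from the same environment, reusing the key path shown in the log:

	ssh -o ConnectTimeout=5 -i /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000/id_rsa docker@192.168.105.5 true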
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-080000 status -v=7 --alsologtostderr": ha-080000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-080000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-080000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-080000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-080000 status -v=7 --alsologtostderr": ha-080000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-080000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-080000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-080000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-080000 status -v=7 --alsologtostderr": ha-080000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-080000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-080000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-080000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-080000 -n ha-080000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-080000 -n ha-080000: exit status 3 (25.958659792s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0910 10:54:51.054220    3599 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0910 10:54:51.054229    3599 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-080000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (115.96s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (53.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0910 10:55:04.550653    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/functional-475000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (27.711765916s)
ha_test.go:413: expected profile "ha-080000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-080000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-080000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-080000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-080000 -n ha-080000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-080000 -n ha-080000: exit status 3 (25.956173709s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0910 10:55:44.721081    3636 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0910 10:55:44.721096    3636 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-080000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (53.67s)
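
The assertion above amounts to decoding the `profile list --output json` payload and comparing the profile's Status field against "Degraded". A minimal sketch of that check in Go, assuming a trimmed-down struct (only the Name and Status fields from the JSON dump above; profileStatus is an illustrative helper, not the test's actual code):

    // Sketch: decode minikube's profile list and report one profile's status.
    // The struct models only the two fields this check needs.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type profileList struct {
    	Valid []struct {
    		Name   string `json:"Name"`
    		Status string `json:"Status"`
    	} `json:"valid"`
    }

    func profileStatus(binary, profile string) (string, error) {
    	out, err := exec.Command(binary, "profile", "list", "--output", "json").Output()
    	if err != nil {
    		return "", err
    	}
    	var pl profileList
    	if err := json.Unmarshal(out, &pl); err != nil {
    		return "", err
    	}
    	for _, p := range pl.Valid {
    		if p.Name == profile {
    			return p.Status, nil
    		}
    	}
    	return "", fmt.Errorf("profile %q not found", profile)
    }

    func main() {
    	status, err := profileStatus("out/minikube-darwin-arm64", "ha-080000")
    	fmt.Println(status, err) // this run yielded "Stopped", not "Degraded"
    }

The report of "Stopped" rather than "Degraded" is consistent with the SSH dial timeouts in the post-mortem above: with the primary node unreachable, minikube cannot tell a degraded cluster from a stopped one.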

TestMultiControlPlane/serial/RestartSecondaryNode (110.51s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-080000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.08118075s)

-- stdout --
	* Starting "ha-080000-m02" control-plane node in "ha-080000" cluster
	* Restarting existing qemu2 VM for "ha-080000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-080000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 10:55:44.754769    3658 out.go:345] Setting OutFile to fd 1 ...
	I0910 10:55:44.755034    3658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 10:55:44.755041    3658 out.go:358] Setting ErrFile to fd 2...
	I0910 10:55:44.755043    3658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 10:55:44.755173    3658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 10:55:44.755415    3658 mustload.go:65] Loading cluster: ha-080000
	I0910 10:55:44.755648    3658 config.go:182] Loaded profile config "ha-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0910 10:55:44.755886    3658 host.go:58] "ha-080000-m02" host status: Stopped
	I0910 10:55:44.760449    3658 out.go:177] * Starting "ha-080000-m02" control-plane node in "ha-080000" cluster
	I0910 10:55:44.764282    3658 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 10:55:44.764295    3658 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 10:55:44.764301    3658 cache.go:56] Caching tarball of preloaded images
	I0910 10:55:44.764376    3658 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 10:55:44.764381    3658 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 10:55:44.764433    3658 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/ha-080000/config.json ...
	I0910 10:55:44.764864    3658 start.go:360] acquireMachinesLock for ha-080000-m02: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 10:55:44.764902    3658 start.go:364] duration metric: took 25.667µs to acquireMachinesLock for "ha-080000-m02"
	I0910 10:55:44.764909    3658 start.go:96] Skipping create...Using existing machine configuration
	I0910 10:55:44.764913    3658 fix.go:54] fixHost starting: m02
	I0910 10:55:44.765003    3658 fix.go:112] recreateIfNeeded on ha-080000-m02: state=Stopped err=<nil>
	W0910 10:55:44.765009    3658 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 10:55:44.768429    3658 out.go:177] * Restarting existing qemu2 VM for "ha-080000-m02" ...
	I0910 10:55:44.769670    3658 qemu.go:418] Using hvf for hardware acceleration
	I0910 10:55:44.769772    3658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:e6:c3:f2:e2:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000-m02/disk.qcow2
	I0910 10:55:44.772602    3658 main.go:141] libmachine: STDOUT: 
	I0910 10:55:44.772680    3658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 10:55:44.772703    3658 fix.go:56] duration metric: took 7.789791ms for fixHost
	I0910 10:55:44.772706    3658 start.go:83] releasing machines lock for "ha-080000-m02", held for 7.800334ms
	W0910 10:55:44.772712    3658 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 10:55:44.772738    3658 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 10:55:44.772741    3658 start.go:729] Will try again in 5 seconds ...
	I0910 10:55:49.774660    3658 start.go:360] acquireMachinesLock for ha-080000-m02: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 10:55:49.774804    3658 start.go:364] duration metric: took 111.375µs to acquireMachinesLock for "ha-080000-m02"
	I0910 10:55:49.774862    3658 start.go:96] Skipping create...Using existing machine configuration
	I0910 10:55:49.774869    3658 fix.go:54] fixHost starting: m02
	I0910 10:55:49.775048    3658 fix.go:112] recreateIfNeeded on ha-080000-m02: state=Stopped err=<nil>
	W0910 10:55:49.775055    3658 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 10:55:49.778844    3658 out.go:177] * Restarting existing qemu2 VM for "ha-080000-m02" ...
	I0910 10:55:49.782855    3658 qemu.go:418] Using hvf for hardware acceleration
	I0910 10:55:49.782912    3658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:e6:c3:f2:e2:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000-m02/disk.qcow2
	I0910 10:55:49.784855    3658 main.go:141] libmachine: STDOUT: 
	I0910 10:55:49.784873    3658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 10:55:49.784896    3658 fix.go:56] duration metric: took 10.027667ms for fixHost
	I0910 10:55:49.784900    3658 start.go:83] releasing machines lock for "ha-080000-m02", held for 10.07075ms
	W0910 10:55:49.784947    3658 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-080000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-080000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 10:55:49.788850    3658 out.go:201] 
	W0910 10:55:49.792931    3658 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 10:55:49.792936    3658 out.go:270] * 
	* 
	W0910 10:55:49.794645    3658 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 10:55:49.798844    3658 out.go:201] 

** /stderr **
ha_test.go:422: I0910 10:55:44.754769    3658 out.go:345] Setting OutFile to fd 1 ...
I0910 10:55:44.755034    3658 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 10:55:44.755041    3658 out.go:358] Setting ErrFile to fd 2...
I0910 10:55:44.755043    3658 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 10:55:44.755173    3658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
I0910 10:55:44.755415    3658 mustload.go:65] Loading cluster: ha-080000
I0910 10:55:44.755648    3658 config.go:182] Loaded profile config "ha-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
W0910 10:55:44.755886    3658 host.go:58] "ha-080000-m02" host status: Stopped
I0910 10:55:44.760449    3658 out.go:177] * Starting "ha-080000-m02" control-plane node in "ha-080000" cluster
I0910 10:55:44.764282    3658 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0910 10:55:44.764295    3658 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
I0910 10:55:44.764301    3658 cache.go:56] Caching tarball of preloaded images
I0910 10:55:44.764376    3658 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0910 10:55:44.764381    3658 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
I0910 10:55:44.764433    3658 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/ha-080000/config.json ...
I0910 10:55:44.764864    3658 start.go:360] acquireMachinesLock for ha-080000-m02: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0910 10:55:44.764902    3658 start.go:364] duration metric: took 25.667µs to acquireMachinesLock for "ha-080000-m02"
I0910 10:55:44.764909    3658 start.go:96] Skipping create...Using existing machine configuration
I0910 10:55:44.764913    3658 fix.go:54] fixHost starting: m02
I0910 10:55:44.765003    3658 fix.go:112] recreateIfNeeded on ha-080000-m02: state=Stopped err=<nil>
W0910 10:55:44.765009    3658 fix.go:138] unexpected machine state, will restart: <nil>
I0910 10:55:44.768429    3658 out.go:177] * Restarting existing qemu2 VM for "ha-080000-m02" ...
I0910 10:55:44.769670    3658 qemu.go:418] Using hvf for hardware acceleration
I0910 10:55:44.769772    3658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:e6:c3:f2:e2:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000-m02/disk.qcow2
I0910 10:55:44.772602    3658 main.go:141] libmachine: STDOUT: 
I0910 10:55:44.772680    3658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0910 10:55:44.772703    3658 fix.go:56] duration metric: took 7.789791ms for fixHost
I0910 10:55:44.772706    3658 start.go:83] releasing machines lock for "ha-080000-m02", held for 7.800334ms
W0910 10:55:44.772712    3658 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0910 10:55:44.772738    3658 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0910 10:55:44.772741    3658 start.go:729] Will try again in 5 seconds ...
I0910 10:55:49.774660    3658 start.go:360] acquireMachinesLock for ha-080000-m02: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0910 10:55:49.774804    3658 start.go:364] duration metric: took 111.375µs to acquireMachinesLock for "ha-080000-m02"
I0910 10:55:49.774862    3658 start.go:96] Skipping create...Using existing machine configuration
I0910 10:55:49.774869    3658 fix.go:54] fixHost starting: m02
I0910 10:55:49.775048    3658 fix.go:112] recreateIfNeeded on ha-080000-m02: state=Stopped err=<nil>
W0910 10:55:49.775055    3658 fix.go:138] unexpected machine state, will restart: <nil>
I0910 10:55:49.778844    3658 out.go:177] * Restarting existing qemu2 VM for "ha-080000-m02" ...
I0910 10:55:49.782855    3658 qemu.go:418] Using hvf for hardware acceleration
I0910 10:55:49.782912    3658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:e6:c3:f2:e2:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000-m02/disk.qcow2
I0910 10:55:49.784855    3658 main.go:141] libmachine: STDOUT: 
I0910 10:55:49.784873    3658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0910 10:55:49.784896    3658 fix.go:56] duration metric: took 10.027667ms for fixHost
I0910 10:55:49.784900    3658 start.go:83] releasing machines lock for "ha-080000-m02", held for 10.07075ms
W0910 10:55:49.784947    3658 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-080000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-080000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0910 10:55:49.788850    3658 out.go:201] 
W0910 10:55:49.792931    3658 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0910 10:55:49.792936    3658 out.go:270] * 
* 
W0910 10:55:49.794645    3658 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0910 10:55:49.798844    3658 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-080000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 status -v=7 --alsologtostderr
E0910 10:57:07.521036    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-080000 status -v=7 --alsologtostderr: exit status 7 (1m19.470207s)

-- stdout --
	ha-080000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-080000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-080000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-080000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0910 10:55:49.837066    3664 out.go:345] Setting OutFile to fd 1 ...
	I0910 10:55:49.837235    3664 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 10:55:49.837238    3664 out.go:358] Setting ErrFile to fd 2...
	I0910 10:55:49.837241    3664 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 10:55:49.837372    3664 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 10:55:49.837493    3664 out.go:352] Setting JSON to false
	I0910 10:55:49.837510    3664 mustload.go:65] Loading cluster: ha-080000
	I0910 10:55:49.837565    3664 notify.go:220] Checking for updates...
	I0910 10:55:49.837723    3664 config.go:182] Loaded profile config "ha-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 10:55:49.837730    3664 status.go:255] checking status of ha-080000 ...
	I0910 10:55:49.838429    3664 status.go:330] ha-080000 host status = "Running" (err=<nil>)
	I0910 10:55:49.838439    3664 host.go:66] Checking if "ha-080000" exists ...
	I0910 10:55:49.838545    3664 host.go:66] Checking if "ha-080000" exists ...
	I0910 10:55:49.838650    3664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 10:55:49.838658    3664 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000/id_rsa Username:docker}
	W0910 10:55:49.838840    3664 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0910 10:55:49.838857    3664 retry.go:31] will retry after 276.753955ms: dial tcp 192.168.105.5:22: connect: host is down
	W0910 10:55:50.117800    3664 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0910 10:55:50.117824    3664 retry.go:31] will retry after 339.047015ms: dial tcp 192.168.105.5:22: connect: host is down
	W0910 10:55:50.459019    3664 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0910 10:55:50.459038    3664 retry.go:31] will retry after 327.312579ms: dial tcp 192.168.105.5:22: connect: host is down
	W0910 10:55:50.788501    3664 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0910 10:55:50.788521    3664 retry.go:31] will retry after 724.260485ms: dial tcp 192.168.105.5:22: connect: host is down
	W0910 10:56:17.429558    3664 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0910 10:56:17.429629    3664 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0910 10:56:17.429641    3664 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0910 10:56:17.429645    3664 status.go:257] ha-080000 status: &{Name:ha-080000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0910 10:56:17.429660    3664 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0910 10:56:17.429667    3664 status.go:255] checking status of ha-080000-m02 ...
	I0910 10:56:17.429856    3664 status.go:330] ha-080000-m02 host status = "Stopped" (err=<nil>)
	I0910 10:56:17.429861    3664 status.go:343] host is not running, skipping remaining checks
	I0910 10:56:17.429863    3664 status.go:257] ha-080000-m02 status: &{Name:ha-080000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 10:56:17.429867    3664 status.go:255] checking status of ha-080000-m03 ...
	I0910 10:56:17.430637    3664 status.go:330] ha-080000-m03 host status = "Running" (err=<nil>)
	I0910 10:56:17.430645    3664 host.go:66] Checking if "ha-080000-m03" exists ...
	I0910 10:56:17.430754    3664 host.go:66] Checking if "ha-080000-m03" exists ...
	I0910 10:56:17.430875    3664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 10:56:17.430881    3664 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000-m03/id_rsa Username:docker}
	W0910 10:56:43.350342    3664 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0910 10:56:43.350400    3664 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0910 10:56:43.350407    3664 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0910 10:56:43.350417    3664 status.go:257] ha-080000-m03 status: &{Name:ha-080000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0910 10:56:43.350425    3664 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0910 10:56:43.350429    3664 status.go:255] checking status of ha-080000-m04 ...
	I0910 10:56:43.351073    3664 status.go:330] ha-080000-m04 host status = "Running" (err=<nil>)
	I0910 10:56:43.351081    3664 host.go:66] Checking if "ha-080000-m04" exists ...
	I0910 10:56:43.351192    3664 host.go:66] Checking if "ha-080000-m04" exists ...
	I0910 10:56:43.351309    3664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 10:56:43.351314    3664 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000-m04/id_rsa Username:docker}
	W0910 10:57:09.270855    3664 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0910 10:57:09.270896    3664 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0910 10:57:09.270902    3664 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0910 10:57:09.270908    3664 status.go:257] ha-080000-m04 status: &{Name:ha-080000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0910 10:57:09.270916    3664 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-080000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-080000 -n ha-080000
E0910 10:57:20.672215    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/functional-475000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-080000 -n ha-080000: exit status 3 (25.956364208s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0910 10:57:35.226974    3715 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0910 10:57:35.226981    3715 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-080000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (110.51s)
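
Every restart attempt in this test dies at the same point: the driver cannot reach the socket_vmnet daemon (`Failed to connect to "/var/run/socket_vmnet": Connection refused`), so no VM on this host can get networking. A minimal pre-flight probe, assuming only the socket path seen in the log (illustrative tooling, not part of minikube):

    // Sketch: probe the socket_vmnet unix socket directly. A "connection
    // refused" here reproduces the driver failure above without qemu.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		fmt.Println("socket_vmnet not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }

A probe like this failing for every VM start in the run points at the CI host's socket_vmnet daemon being down, rather than at any one test.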

TestMultiControlPlane/serial/RestartClusterKeepsNodes (136.29s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-080000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-080000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-080000 -v=7 --alsologtostderr: (2m10.911518791s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-080000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-080000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.226943541s)

-- stdout --
	* [ha-080000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-080000" primary control-plane node in "ha-080000" cluster
	* Restarting existing qemu2 VM for "ha-080000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-080000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 11:00:13.991534    4164 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:00:13.991765    4164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:00:13.991769    4164 out.go:358] Setting ErrFile to fd 2...
	I0910 11:00:13.991772    4164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:00:13.991962    4164 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:00:13.993194    4164 out.go:352] Setting JSON to false
	I0910 11:00:14.012980    4164 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3578,"bootTime":1725987636,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:00:14.013065    4164 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:00:14.017554    4164 out.go:177] * [ha-080000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:00:14.024524    4164 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:00:14.024551    4164 notify.go:220] Checking for updates...
	I0910 11:00:14.031427    4164 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:00:14.034455    4164 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:00:14.037510    4164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:00:14.040461    4164 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:00:14.043455    4164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:00:14.046838    4164 config.go:182] Loaded profile config "ha-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:00:14.046888    4164 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:00:14.051352    4164 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 11:00:14.058614    4164 start.go:297] selected driver: qemu2
	I0910 11:00:14.058623    4164 start.go:901] validating driver "qemu2" against &{Name:ha-080000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-080000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:00:14.058706    4164 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:00:14.061528    4164 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 11:00:14.061567    4164 cni.go:84] Creating CNI manager for ""
	I0910 11:00:14.061575    4164 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0910 11:00:14.061617    4164 start.go:340] cluster config:
	{Name:ha-080000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-080000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:00:14.065834    4164 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:00:14.074411    4164 out.go:177] * Starting "ha-080000" primary control-plane node in "ha-080000" cluster
	I0910 11:00:14.078354    4164 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:00:14.078372    4164 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:00:14.078381    4164 cache.go:56] Caching tarball of preloaded images
	I0910 11:00:14.078447    4164 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:00:14.078454    4164 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:00:14.078526    4164 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/ha-080000/config.json ...
	I0910 11:00:14.079015    4164 start.go:360] acquireMachinesLock for ha-080000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:00:14.079054    4164 start.go:364] duration metric: took 32.166µs to acquireMachinesLock for "ha-080000"
	I0910 11:00:14.079063    4164 start.go:96] Skipping create...Using existing machine configuration
	I0910 11:00:14.079068    4164 fix.go:54] fixHost starting: 
	I0910 11:00:14.079192    4164 fix.go:112] recreateIfNeeded on ha-080000: state=Stopped err=<nil>
	W0910 11:00:14.079201    4164 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 11:00:14.083462    4164 out.go:177] * Restarting existing qemu2 VM for "ha-080000" ...
	I0910 11:00:14.090499    4164 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:00:14.090537    4164 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:60:aa:a3:1d:ad -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000/disk.qcow2
	I0910 11:00:14.092620    4164 main.go:141] libmachine: STDOUT: 
	I0910 11:00:14.092640    4164 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:00:14.092669    4164 fix.go:56] duration metric: took 13.601833ms for fixHost
	I0910 11:00:14.092675    4164 start.go:83] releasing machines lock for "ha-080000", held for 13.616958ms
	W0910 11:00:14.092683    4164 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:00:14.092730    4164 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:00:14.092735    4164 start.go:729] Will try again in 5 seconds ...
	I0910 11:00:19.094806    4164 start.go:360] acquireMachinesLock for ha-080000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:00:19.095373    4164 start.go:364] duration metric: took 414.791µs to acquireMachinesLock for "ha-080000"
	I0910 11:00:19.095506    4164 start.go:96] Skipping create...Using existing machine configuration
	I0910 11:00:19.095523    4164 fix.go:54] fixHost starting: 
	I0910 11:00:19.096229    4164 fix.go:112] recreateIfNeeded on ha-080000: state=Stopped err=<nil>
	W0910 11:00:19.096251    4164 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 11:00:19.104793    4164 out.go:177] * Restarting existing qemu2 VM for "ha-080000" ...
	I0910 11:00:19.107761    4164 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:00:19.107965    4164 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:60:aa:a3:1d:ad -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000/disk.qcow2
	I0910 11:00:19.116026    4164 main.go:141] libmachine: STDOUT: 
	I0910 11:00:19.116077    4164 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:00:19.116150    4164 fix.go:56] duration metric: took 20.628958ms for fixHost
	I0910 11:00:19.116165    4164 start.go:83] releasing machines lock for "ha-080000", held for 20.768542ms
	W0910 11:00:19.116358    4164 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-080000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-080000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:00:19.124583    4164 out.go:201] 
	W0910 11:00:19.128789    4164 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:00:19.128809    4164 out.go:270] * 
	* 
	W0910 11:00:19.130919    4164 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:00:19.142771    4164 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-080000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-080000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-080000 -n ha-080000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-080000 -n ha-080000: exit status 7 (32.300875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-080000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (136.29s)
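
The start path above follows a fixed-delay retry shape: StartHost fails, minikube waits five seconds ("Will try again in 5 seconds ..."), retries once, then exits with GUEST_PROVISION. A sketch of that pattern with illustrative names (not minikube's internals):

    // Sketch of the retry shape visible in the log: attempt, sleep a fixed
    // delay, attempt once more, then give up with the last error.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    func startWithRetry(start func() error, attempts int, delay time.Duration) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = start(); err == nil {
    			return nil
    		}
    		if i < attempts-1 {
    			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
    			time.Sleep(delay)
    		}
    	}
    	return err
    }

    func main() {
    	err := startWithRetry(func() error {
    		return errors.New(`connect to "/var/run/socket_vmnet": connection refused`)
    	}, 2, 5*time.Second)
    	fmt.Println("final error:", err)
    }

Because the daemon never comes back between attempts, the retry here only adds five seconds to each failure rather than recovering anything.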

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-080000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.250625ms)

-- stdout --
	* The control-plane node ha-080000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-080000"

-- /stdout --
** stderr ** 
	I0910 11:00:19.278231    4178 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:00:19.278472    4178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:00:19.278475    4178 out.go:358] Setting ErrFile to fd 2...
	I0910 11:00:19.278478    4178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:00:19.278623    4178 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:00:19.278861    4178 mustload.go:65] Loading cluster: ha-080000
	I0910 11:00:19.279090    4178 config.go:182] Loaded profile config "ha-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0910 11:00:19.279401    4178 out.go:270] ! The control-plane node ha-080000 host is not running (will try others): state=Stopped
	! The control-plane node ha-080000 host is not running (will try others): state=Stopped
	W0910 11:00:19.279503    4178 out.go:270] ! The control-plane node ha-080000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-080000-m02 host is not running (will try others): state=Stopped
	I0910 11:00:19.282708    4178 out.go:177] * The control-plane node ha-080000-m03 host is not running: state=Stopped
	I0910 11:00:19.285696    4178 out.go:177]   To start a cluster, run: "minikube start -p ha-080000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-080000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-080000 status -v=7 --alsologtostderr: exit status 7 (30.238083ms)

-- stdout --
	ha-080000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-080000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-080000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-080000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0910 11:00:19.317880    4180 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:00:19.318021    4180 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:00:19.318024    4180 out.go:358] Setting ErrFile to fd 2...
	I0910 11:00:19.318027    4180 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:00:19.318160    4180 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:00:19.318275    4180 out.go:352] Setting JSON to false
	I0910 11:00:19.318285    4180 mustload.go:65] Loading cluster: ha-080000
	I0910 11:00:19.318351    4180 notify.go:220] Checking for updates...
	I0910 11:00:19.318518    4180 config.go:182] Loaded profile config "ha-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:00:19.318524    4180 status.go:255] checking status of ha-080000 ...
	I0910 11:00:19.318733    4180 status.go:330] ha-080000 host status = "Stopped" (err=<nil>)
	I0910 11:00:19.318738    4180 status.go:343] host is not running, skipping remaining checks
	I0910 11:00:19.318740    4180 status.go:257] ha-080000 status: &{Name:ha-080000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 11:00:19.318749    4180 status.go:255] checking status of ha-080000-m02 ...
	I0910 11:00:19.318839    4180 status.go:330] ha-080000-m02 host status = "Stopped" (err=<nil>)
	I0910 11:00:19.318841    4180 status.go:343] host is not running, skipping remaining checks
	I0910 11:00:19.318843    4180 status.go:257] ha-080000-m02 status: &{Name:ha-080000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 11:00:19.318848    4180 status.go:255] checking status of ha-080000-m03 ...
	I0910 11:00:19.318933    4180 status.go:330] ha-080000-m03 host status = "Stopped" (err=<nil>)
	I0910 11:00:19.318936    4180 status.go:343] host is not running, skipping remaining checks
	I0910 11:00:19.318941    4180 status.go:257] ha-080000-m03 status: &{Name:ha-080000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 11:00:19.318945    4180 status.go:255] checking status of ha-080000-m04 ...
	I0910 11:00:19.319039    4180 status.go:330] ha-080000-m04 host status = "Stopped" (err=<nil>)
	I0910 11:00:19.319042    4180 status.go:343] host is not running, skipping remaining checks
	I0910 11:00:19.319044    4180 status.go:257] ha-080000-m04 status: &{Name:ha-080000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-080000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-080000 -n ha-080000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-080000 -n ha-080000: exit status 7 (30.3555ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-080000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
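
Note: the delete exits with status 83 because mustload walks the control-plane nodes in order and finds every host Stopped (see the stderr above). A minimal pre-flight sketch, not part of the suite, that reuses the same --format={{.Host}} template the post-mortem helper runs; the profile and node names are taken from this log:

// hostcheck: query one node's host state the same way the
// post-mortem helper does, then gate node operations on it.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState shells out to minikube; stopped hosts make minikube exit
// non-zero (status 7 in the log above), so we keep whatever was
// printed to stdout and ignore the exit error.
func hostState(profile, node string) string {
	out, _ := exec.Command("minikube", "status",
		"--format={{.Host}}", "-p", profile, "-n", node).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	if state := hostState("ha-080000", "ha-080000"); state != "Running" {
		fmt.Printf("host is %q; skipping \"node delete\"\n", state)
		return
	}
	fmt.Println("host running; safe to delete m03")
}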

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-080000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-080000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-080000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-080000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-080000 -n ha-080000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-080000 -n ha-080000: exit status 7 (29.343792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-080000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
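
Note: the assertion at ha_test.go:413 reads only the profile's top-level Status out of "minikube profile list --output json" (the full dump is quoted above). A pared-down decoder for just those fields, with the struct shape inferred from the JSON in this log rather than taken from minikube's source:

// profilestatus: decode only the Name/Status pairs that the
// "Degraded" assertions compare against.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors only the fields visible in the dump above;
// json.Unmarshal silently ignores everything else.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The test expects "Degraded" for ha-080000; this run shows "Stopped".
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}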

TestMultiControlPlane/serial/StopCluster (103.91s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-080000 stop -v=7 --alsologtostderr: (1m43.808341458s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-080000 status -v=7 --alsologtostderr: exit status 7 (65.05325ms)

-- stdout --
	ha-080000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-080000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-080000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-080000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0910 11:02:03.295466    4267 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:02:03.295686    4267 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:02:03.295691    4267 out.go:358] Setting ErrFile to fd 2...
	I0910 11:02:03.295694    4267 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:02:03.295855    4267 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:02:03.296011    4267 out.go:352] Setting JSON to false
	I0910 11:02:03.296025    4267 mustload.go:65] Loading cluster: ha-080000
	I0910 11:02:03.296057    4267 notify.go:220] Checking for updates...
	I0910 11:02:03.296323    4267 config.go:182] Loaded profile config "ha-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:02:03.296332    4267 status.go:255] checking status of ha-080000 ...
	I0910 11:02:03.296617    4267 status.go:330] ha-080000 host status = "Stopped" (err=<nil>)
	I0910 11:02:03.296623    4267 status.go:343] host is not running, skipping remaining checks
	I0910 11:02:03.296626    4267 status.go:257] ha-080000 status: &{Name:ha-080000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 11:02:03.296640    4267 status.go:255] checking status of ha-080000-m02 ...
	I0910 11:02:03.296759    4267 status.go:330] ha-080000-m02 host status = "Stopped" (err=<nil>)
	I0910 11:02:03.296763    4267 status.go:343] host is not running, skipping remaining checks
	I0910 11:02:03.296766    4267 status.go:257] ha-080000-m02 status: &{Name:ha-080000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 11:02:03.296771    4267 status.go:255] checking status of ha-080000-m03 ...
	I0910 11:02:03.296901    4267 status.go:330] ha-080000-m03 host status = "Stopped" (err=<nil>)
	I0910 11:02:03.296905    4267 status.go:343] host is not running, skipping remaining checks
	I0910 11:02:03.296908    4267 status.go:257] ha-080000-m03 status: &{Name:ha-080000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 11:02:03.296912    4267 status.go:255] checking status of ha-080000-m04 ...
	I0910 11:02:03.297054    4267 status.go:330] ha-080000-m04 host status = "Stopped" (err=<nil>)
	I0910 11:02:03.297060    4267 status.go:343] host is not running, skipping remaining checks
	I0910 11:02:03.297063    4267 status.go:257] ha-080000-m04 status: &{Name:ha-080000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-080000 status -v=7 --alsologtostderr": ha-080000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-080000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-080000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-080000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-080000 status -v=7 --alsologtostderr": ha-080000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-080000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-080000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-080000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-080000 status -v=7 --alsologtostderr": ha-080000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-080000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-080000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-080000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-080000 -n ha-080000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-080000 -n ha-080000: exit status 7 (33.506208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-080000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (103.91s)
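
Note: the three assertions above (ha_test.go:543, 549, 552) scan the plain-text status output for node roles and states. A rough sketch of that kind of counting, assumed from the failure messages rather than copied from the test source:

// statuscount: tally roles and states from "minikube status" text,
// the shape the StopCluster assertions check.
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Trimmed from the status output quoted in the log above.
	status := `ha-080000
type: Control Plane
host: Stopped
ha-080000-m04
type: Worker
host: Stopped`

	controlPlanes := strings.Count(status, "type: Control Plane")
	stoppedHosts := strings.Count(status, "host: Stopped")
	fmt.Printf("control planes: %d, stopped hosts: %d\n", controlPlanes, stoppedHosts)
}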

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-080000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
E0910 11:02:07.515222    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-080000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.178871042s)

-- stdout --
	* [ha-080000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-080000" primary control-plane node in "ha-080000" cluster
	* Restarting existing qemu2 VM for "ha-080000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-080000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 11:02:03.359343    4271 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:02:03.359476    4271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:02:03.359479    4271 out.go:358] Setting ErrFile to fd 2...
	I0910 11:02:03.359481    4271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:02:03.359603    4271 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:02:03.360609    4271 out.go:352] Setting JSON to false
	I0910 11:02:03.376610    4271 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3687,"bootTime":1725987636,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:02:03.376677    4271 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:02:03.381173    4271 out.go:177] * [ha-080000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:02:03.388034    4271 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:02:03.388091    4271 notify.go:220] Checking for updates...
	I0910 11:02:03.394044    4271 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:02:03.397060    4271 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:02:03.398487    4271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:02:03.402036    4271 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:02:03.405045    4271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:02:03.408395    4271 config.go:182] Loaded profile config "ha-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:02:03.408677    4271 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:02:03.413031    4271 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 11:02:03.420015    4271 start.go:297] selected driver: qemu2
	I0910 11:02:03.420021    4271 start.go:901] validating driver "qemu2" against &{Name:ha-080000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.0 ClusterName:ha-080000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:02:03.420134    4271 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:02:03.422240    4271 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 11:02:03.422263    4271 cni.go:84] Creating CNI manager for ""
	I0910 11:02:03.422268    4271 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0910 11:02:03.422309    4271 start.go:340] cluster config:
	{Name:ha-080000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-080000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:02:03.425655    4271 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:02:03.434104    4271 out.go:177] * Starting "ha-080000" primary control-plane node in "ha-080000" cluster
	I0910 11:02:03.438012    4271 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:02:03.438027    4271 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:02:03.438054    4271 cache.go:56] Caching tarball of preloaded images
	I0910 11:02:03.438112    4271 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:02:03.438117    4271 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:02:03.438202    4271 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/ha-080000/config.json ...
	I0910 11:02:03.438663    4271 start.go:360] acquireMachinesLock for ha-080000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:02:03.438697    4271 start.go:364] duration metric: took 28.041µs to acquireMachinesLock for "ha-080000"
	I0910 11:02:03.438706    4271 start.go:96] Skipping create...Using existing machine configuration
	I0910 11:02:03.438712    4271 fix.go:54] fixHost starting: 
	I0910 11:02:03.438835    4271 fix.go:112] recreateIfNeeded on ha-080000: state=Stopped err=<nil>
	W0910 11:02:03.438844    4271 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 11:02:03.443067    4271 out.go:177] * Restarting existing qemu2 VM for "ha-080000" ...
	I0910 11:02:03.451027    4271 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:02:03.451061    4271 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:60:aa:a3:1d:ad -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000/disk.qcow2
	I0910 11:02:03.453130    4271 main.go:141] libmachine: STDOUT: 
	I0910 11:02:03.453149    4271 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:02:03.453177    4271 fix.go:56] duration metric: took 14.466791ms for fixHost
	I0910 11:02:03.453190    4271 start.go:83] releasing machines lock for "ha-080000", held for 14.479875ms
	W0910 11:02:03.453198    4271 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:02:03.453232    4271 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:02:03.453237    4271 start.go:729] Will try again in 5 seconds ...
	I0910 11:02:08.455328    4271 start.go:360] acquireMachinesLock for ha-080000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:02:08.455961    4271 start.go:364] duration metric: took 461.334µs to acquireMachinesLock for "ha-080000"
	I0910 11:02:08.456133    4271 start.go:96] Skipping create...Using existing machine configuration
	I0910 11:02:08.456156    4271 fix.go:54] fixHost starting: 
	I0910 11:02:08.456898    4271 fix.go:112] recreateIfNeeded on ha-080000: state=Stopped err=<nil>
	W0910 11:02:08.456929    4271 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 11:02:08.461417    4271 out.go:177] * Restarting existing qemu2 VM for "ha-080000" ...
	I0910 11:02:08.467299    4271 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:02:08.467564    4271 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:60:aa:a3:1d:ad -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/ha-080000/disk.qcow2
	I0910 11:02:08.477177    4271 main.go:141] libmachine: STDOUT: 
	I0910 11:02:08.477286    4271 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:02:08.477421    4271 fix.go:56] duration metric: took 21.24375ms for fixHost
	I0910 11:02:08.477441    4271 start.go:83] releasing machines lock for "ha-080000", held for 21.419875ms
	W0910 11:02:08.477662    4271 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-080000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-080000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:02:08.485421    4271 out.go:201] 
	W0910 11:02:08.489409    4271 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:02:08.489456    4271 out.go:270] * 
	* 
	W0910 11:02:08.492188    4271 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:02:08.502438    4271 out.go:201] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-080000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-080000 -n ha-080000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-080000 -n ha-080000: exit status 7 (69.202875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-080000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
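
Note: every restart attempt above dies at the same point: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the dial of /var/run/socket_vmnet is refused, which usually means no socket_vmnet daemon is listening on that path. A standalone probe of that socket (paths taken from this log; a diagnostic sketch, not part of the suite):

// vmnetprobe: dial the socket_vmnet control socket the same way
// socket_vmnet_client must before it can hand qemu a file descriptor.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath in the profile config above

	if _, err := os.Stat(sock); err != nil {
		fmt.Fprintf(os.Stderr, "socket path missing: %v\n", err)
		os.Exit(1)
	}
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// ECONNREFUSED here reproduces the log's failure: the path
		// exists but no daemon is accepting on it.
		fmt.Fprintf(os.Stderr, "dial failed: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}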

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-080000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-080000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-080000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-080000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-080000 -n ha-080000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-080000 -n ha-080000: exit status 7 (29.39ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-080000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-080000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-080000 --control-plane -v=7 --alsologtostderr: exit status 83 (42.02425ms)

-- stdout --
	* The control-plane node ha-080000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-080000"

-- /stdout --
** stderr ** 
	I0910 11:02:08.691312    4288 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:02:08.691468    4288 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:02:08.691471    4288 out.go:358] Setting ErrFile to fd 2...
	I0910 11:02:08.691473    4288 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:02:08.691604    4288 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:02:08.691831    4288 mustload.go:65] Loading cluster: ha-080000
	I0910 11:02:08.692035    4288 config.go:182] Loaded profile config "ha-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0910 11:02:08.692339    4288 out.go:270] ! The control-plane node ha-080000 host is not running (will try others): state=Stopped
	! The control-plane node ha-080000 host is not running (will try others): state=Stopped
	W0910 11:02:08.692447    4288 out.go:270] ! The control-plane node ha-080000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-080000-m02 host is not running (will try others): state=Stopped
	I0910 11:02:08.696540    4288 out.go:177] * The control-plane node ha-080000-m03 host is not running: state=Stopped
	I0910 11:02:08.700487    4288 out.go:177]   To start a cluster, run: "minikube start -p ha-080000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-080000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-080000 -n ha-080000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-080000 -n ha-080000: exit status 7 (29.208625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-080000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestImageBuild/serial/Setup (10.19s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-487000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-487000 --driver=qemu2 : exit status 80 (10.127801708s)

-- stdout --
	* [image-487000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-487000" primary control-plane node in "image-487000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-487000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-487000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-487000 -n image-487000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-487000 -n image-487000: exit status 7 (66.307625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-487000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.19s)

TestJSONOutput/start/Command (9.84s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-966000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E0910 11:02:20.665328    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/functional-475000/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-966000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.838666458s)

-- stdout --
	{"specversion":"1.0","id":"cb435634-e689-4638-a3f5-b990432cf98c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-966000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1d24d7c3-de44-4ceb-b9ea-af9c278a6c92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19598"}}
	{"specversion":"1.0","id":"45a4fbdf-7d85-413c-8c43-a5abe1f38af5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig"}}
	{"specversion":"1.0","id":"13e154e2-f2fc-4ec4-827c-67a1069ea661","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"d1032a09-5f46-4465-a936-a3b31eed950d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9fc564cc-21e0-4c71-a2de-73a873ce803e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube"}}
	{"specversion":"1.0","id":"db700ab1-009e-4e39-905d-8fd7c8d3eb5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"65fa4454-21e9-45ec-baa5-108442bb6929","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"501a4e0f-3576-4876-bf5c-4a7d45b3a791","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"9523072f-b9c0-420d-aadc-ab198b12e737","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-966000\" primary control-plane node in \"json-output-966000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a7b35cfa-5dc0-44d4-b73d-11754e1b4058","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"7717c412-adb6-469b-b467-ba9f9f005bfa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-966000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"6d05bddc-1a22-4aba-8673-863632b1c550","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"3faf884e-92dd-4fd6-abad-56599380b7e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"a1b28c1b-65fc-4acc-9adf-1ce10f1cb4cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-966000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"fcf9eaf2-4870-4c4c-8354-b410bebd6c3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"54d15f01-5261-43f5-b6ee-3d75aaed17fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-966000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.84s)
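
The two parse errors above (json_output_test.go:213 and :70) follow directly from the driver failure: the test decodes stdout line by line as CloudEvents JSON, and the raw "OUTPUT: " / "ERROR: ..." lines the qemu2 driver leaks into stdout are not JSON, so decoding stops at the first byte. A minimal standalone sketch of that decode step (the struct and its field set are illustrative, not the test suite's own types):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// cloudEvent mirrors the fields minikube emits with --output=json;
// an illustrative type, not the one json_output_test.go uses.
type cloudEvent struct {
	SpecVersion string          `json:"specversion"`
	Type        string          `json:"type"`
	Data        json.RawMessage `json:"data"`
}

func main() {
	// One valid CloudEvent line followed by the raw "OUTPUT: " line that
	// the qemu2 driver leaked into stdout above.
	stdout := "{\"specversion\":\"1.0\",\"type\":\"io.k8s.sigs.minikube.step\",\"data\":{}}\nOUTPUT: "

	sc := bufio.NewScanner(strings.NewReader(stdout))
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			// Prints: converting to cloud events: invalid character 'O'
			// looking for beginning of value
			fmt.Println("converting to cloud events:", err)
			return
		}
		fmt.Println("ok:", ev.Type)
	}
}

The unpause failure below fails the same way, except there the whole output is the human-readable "*"-prefixed text, which trips the decoder on '*' instead of 'O'.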

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-966000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-966000 --output=json --user=testUser: exit status 83 (76.26425ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"037331a6-b2c8-4010-9335-fab76a89cd07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-966000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"cc2b5d16-34db-46b1-ae00-0743ce2a3439","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-966000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-966000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-966000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-966000 --output=json --user=testUser: exit status 83 (42.711541ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-966000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-966000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-966000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-966000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.16s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-645000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-645000 --driver=qemu2 : exit status 80 (9.856877625s)

                                                
                                                
-- stdout --
	* [first-645000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-645000" primary control-plane node in "first-645000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-645000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-645000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-09-10 11:02:43.290826 -0700 PDT m=+2062.739583960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-647000 -n second-647000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-647000 -n second-647000: exit status 85 (83.657292ms)

                                                
                                                
-- stdout --
	* Profile "second-647000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-647000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-647000" host is not running, skipping log retrieval (state="* Profile \"second-647000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-647000\"")
helpers_test.go:175: Cleaning up "second-647000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-647000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-09-10 11:02:43.484566 -0700 PDT m=+2062.933328418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-645000 -n first-645000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-645000 -n first-645000: exit status 7 (30.091167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-645000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-645000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-645000
--- FAIL: TestMinikubeProfile (10.16s)
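
Every start in this run dies the same way: Failed to connect to "/var/run/socket_vmnet": Connection refused. That means the socket_vmnet daemon was not listening on the CI host, so no qemu2 VM could get networking regardless of the test. A pre-flight probe for that condition might look like the following sketch (the socket path is taken from the logs above; everything else is illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path taken from the failures above; adjust if socket_vmnet
	// is installed elsewhere on the host.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// On this agent the dial fails the same way the driver does,
		// e.g. "connect: connection refused".
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening at", sock)
}

Running such a probe before the suite would turn ~90 downstream provisioning failures into one clear environment error.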

TestMountStart/serial/StartWithMountFirst (10.03s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-490000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-490000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.954154541s)

                                                
                                                
-- stdout --
	* [mount-start-1-490000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-490000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-490000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-490000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-490000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-490000 -n mount-start-1-490000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-490000 -n mount-start-1-490000: exit status 7 (71.376917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-490000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.03s)

TestMultiNode/serial/FreshStart2Nodes (9.86s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-416000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-416000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.784366333s)

                                                
                                                
-- stdout --
	* [multinode-416000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-416000" primary control-plane node in "multinode-416000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-416000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 11:02:53.831221    4458 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:02:53.831370    4458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:02:53.831373    4458 out.go:358] Setting ErrFile to fd 2...
	I0910 11:02:53.831376    4458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:02:53.831505    4458 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:02:53.832575    4458 out.go:352] Setting JSON to false
	I0910 11:02:53.849091    4458 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3737,"bootTime":1725987636,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:02:53.849161    4458 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:02:53.854506    4458 out.go:177] * [multinode-416000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:02:53.862456    4458 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:02:53.862501    4458 notify.go:220] Checking for updates...
	I0910 11:02:53.873400    4458 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:02:53.876425    4458 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:02:53.877950    4458 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:02:53.881440    4458 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:02:53.884404    4458 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:02:53.887579    4458 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:02:53.891387    4458 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 11:02:53.898483    4458 start.go:297] selected driver: qemu2
	I0910 11:02:53.898491    4458 start.go:901] validating driver "qemu2" against <nil>
	I0910 11:02:53.898503    4458 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:02:53.900829    4458 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 11:02:53.903393    4458 out.go:177] * Automatically selected the socket_vmnet network
	I0910 11:02:53.906507    4458 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 11:02:53.906559    4458 cni.go:84] Creating CNI manager for ""
	I0910 11:02:53.906567    4458 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0910 11:02:53.906571    4458 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0910 11:02:53.906598    4458 start.go:340] cluster config:
	{Name:multinode-416000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:02:53.910257    4458 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:02:53.917407    4458 out.go:177] * Starting "multinode-416000" primary control-plane node in "multinode-416000" cluster
	I0910 11:02:53.921412    4458 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:02:53.921426    4458 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:02:53.921431    4458 cache.go:56] Caching tarball of preloaded images
	I0910 11:02:53.921488    4458 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:02:53.921493    4458 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:02:53.921679    4458 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/multinode-416000/config.json ...
	I0910 11:02:53.921692    4458 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/multinode-416000/config.json: {Name:mk03c9a8955c8d66a25c92d0ea6b8b0b7f4f5328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:02:53.921922    4458 start.go:360] acquireMachinesLock for multinode-416000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:02:53.921958    4458 start.go:364] duration metric: took 28.958µs to acquireMachinesLock for "multinode-416000"
	I0910 11:02:53.921971    4458 start.go:93] Provisioning new machine with config: &{Name:multinode-416000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:02:53.922001    4458 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:02:53.931468    4458 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 11:02:53.948835    4458 start.go:159] libmachine.API.Create for "multinode-416000" (driver="qemu2")
	I0910 11:02:53.948863    4458 client.go:168] LocalClient.Create starting
	I0910 11:02:53.948922    4458 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:02:53.948957    4458 main.go:141] libmachine: Decoding PEM data...
	I0910 11:02:53.948969    4458 main.go:141] libmachine: Parsing certificate...
	I0910 11:02:53.949006    4458 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:02:53.949028    4458 main.go:141] libmachine: Decoding PEM data...
	I0910 11:02:53.949037    4458 main.go:141] libmachine: Parsing certificate...
	I0910 11:02:53.949389    4458 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:02:54.108316    4458 main.go:141] libmachine: Creating SSH key...
	I0910 11:02:54.145874    4458 main.go:141] libmachine: Creating Disk image...
	I0910 11:02:54.145879    4458 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:02:54.146075    4458 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/disk.qcow2
	I0910 11:02:54.155199    4458 main.go:141] libmachine: STDOUT: 
	I0910 11:02:54.155216    4458 main.go:141] libmachine: STDERR: 
	I0910 11:02:54.155269    4458 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/disk.qcow2 +20000M
	I0910 11:02:54.163004    4458 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:02:54.163018    4458 main.go:141] libmachine: STDERR: 
	I0910 11:02:54.163026    4458 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/disk.qcow2
	I0910 11:02:54.163032    4458 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:02:54.163044    4458 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:02:54.163070    4458 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:57:28:54:f0:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/disk.qcow2
	I0910 11:02:54.164674    4458 main.go:141] libmachine: STDOUT: 
	I0910 11:02:54.164689    4458 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:02:54.164708    4458 client.go:171] duration metric: took 215.845709ms to LocalClient.Create
	I0910 11:02:56.166851    4458 start.go:128] duration metric: took 2.244879208s to createHost
	I0910 11:02:56.166936    4458 start.go:83] releasing machines lock for "multinode-416000", held for 2.245023292s
	W0910 11:02:56.166986    4458 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:02:56.178173    4458 out.go:177] * Deleting "multinode-416000" in qemu2 ...
	W0910 11:02:56.217343    4458 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:02:56.217367    4458 start.go:729] Will try again in 5 seconds ...
	I0910 11:03:01.219523    4458 start.go:360] acquireMachinesLock for multinode-416000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:03:01.220142    4458 start.go:364] duration metric: took 470.083µs to acquireMachinesLock for "multinode-416000"
	I0910 11:03:01.220299    4458 start.go:93] Provisioning new machine with config: &{Name:multinode-416000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:03:01.220547    4458 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:03:01.234140    4458 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 11:03:01.285649    4458 start.go:159] libmachine.API.Create for "multinode-416000" (driver="qemu2")
	I0910 11:03:01.285703    4458 client.go:168] LocalClient.Create starting
	I0910 11:03:01.285806    4458 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:03:01.285861    4458 main.go:141] libmachine: Decoding PEM data...
	I0910 11:03:01.285878    4458 main.go:141] libmachine: Parsing certificate...
	I0910 11:03:01.285948    4458 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:03:01.285995    4458 main.go:141] libmachine: Decoding PEM data...
	I0910 11:03:01.286006    4458 main.go:141] libmachine: Parsing certificate...
	I0910 11:03:01.286907    4458 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:03:01.457380    4458 main.go:141] libmachine: Creating SSH key...
	I0910 11:03:01.513527    4458 main.go:141] libmachine: Creating Disk image...
	I0910 11:03:01.513531    4458 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:03:01.513738    4458 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/disk.qcow2
	I0910 11:03:01.523220    4458 main.go:141] libmachine: STDOUT: 
	I0910 11:03:01.523239    4458 main.go:141] libmachine: STDERR: 
	I0910 11:03:01.523287    4458 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/disk.qcow2 +20000M
	I0910 11:03:01.531099    4458 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:03:01.531120    4458 main.go:141] libmachine: STDERR: 
	I0910 11:03:01.531128    4458 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/disk.qcow2
	I0910 11:03:01.531133    4458 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:03:01.531139    4458 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:03:01.531168    4458 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:a5:c4:a5:46:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/disk.qcow2
	I0910 11:03:01.532786    4458 main.go:141] libmachine: STDOUT: 
	I0910 11:03:01.532801    4458 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:03:01.532813    4458 client.go:171] duration metric: took 247.112334ms to LocalClient.Create
	I0910 11:03:03.534950    4458 start.go:128] duration metric: took 2.314430333s to createHost
	I0910 11:03:03.535034    4458 start.go:83] releasing machines lock for "multinode-416000", held for 2.314918917s
	W0910 11:03:03.535510    4458 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-416000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-416000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:03:03.551131    4458 out.go:201] 
	W0910 11:03:03.555282    4458 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:03:03.555334    4458 out.go:270] * 
	* 
	W0910 11:03:03.558000    4458 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:03:03.573139    4458 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-416000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (68.535083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.86s)
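
The --alsologtostderr trace above shows what libmachine does before the network failure: it builds the VM disk with two qemu-img invocations (a raw-to-qcow2 convert, then a +20000M resize) and only then launches qemu-system-aarch64 through socket_vmnet_client. A hedged sketch of just the disk-preparation step, with the commands and size taken from the log and error handling simplified relative to whatever libmachine actually does:

package main

import (
	"fmt"
	"os/exec"
)

// prepareDisk mirrors the two qemu-img steps in the trace: convert the raw
// image to qcow2, then grow it by the requested amount. The "+20000M" size
// comes from the log; paths here are placeholders.
func prepareDisk(raw, qcow2 string) error {
	if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
		return fmt.Errorf("qemu-img convert: %v: %s", err, out)
	}
	if out, err := exec.Command("qemu-img", "resize", qcow2, "+20000M").CombinedOutput(); err != nil {
		return fmt.Errorf("qemu-img resize: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := prepareDisk("disk.qcow2.raw", "disk.qcow2"); err != nil {
		fmt.Println(err)
	}
}

Note that both qemu-img steps succeed in the trace (STDERR is empty); the run only fails at the subsequent socket_vmnet_client launch.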

TestMultiNode/serial/DeployApp2Nodes (84.16s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (131.260583ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-416000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- rollout status deployment/busybox: exit status 1 (58.606959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.358917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.497167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.353959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.284625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.13125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.111333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.063791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0910 11:03:30.608658    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.720959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.021584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.720208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.498958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.097416ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.590041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.138875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (30.568167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (84.16s)
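
All of the minikube kubectl -p multinode-416000 calls above fail with no server found for cluster "multinode-416000" because the earlier start never recorded an API server address for the profile. A sketch of checking for that condition with client-go before shelling out to kubectl (illustrative only; the fallback kubeconfig path is the usual default, while this run points KUBECONFIG elsewhere):

package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Honor KUBECONFIG if set, as this CI run does; otherwise use the default.
	kubeconfig := os.Getenv("KUBECONFIG")
	if kubeconfig == "" {
		home, _ := os.UserHomeDir()
		kubeconfig = filepath.Join(home, ".kube", "config")
	}

	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		fmt.Println("loading kubeconfig:", err)
		return
	}

	const name = "multinode-416000"
	cluster, ok := cfg.Clusters[name]
	if !ok || cluster.Server == "" {
		// The failure mode above: the profile exists on disk, but no API
		// server address was ever written, so kubectl has nothing to call.
		fmt.Printf("no server found for cluster %q\n", name)
		return
	}
	fmt.Println("cluster server:", cluster.Server)
}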

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.703292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (31.110292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-416000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-416000 -v 3 --alsologtostderr: exit status 83 (42.565375ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-416000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-416000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 11:04:27.925454    4591 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:04:27.925615    4591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:04:27.925619    4591 out.go:358] Setting ErrFile to fd 2...
	I0910 11:04:27.925621    4591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:04:27.925755    4591 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:04:27.925999    4591 mustload.go:65] Loading cluster: multinode-416000
	I0910 11:04:27.926175    4591 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:04:27.929854    4591 out.go:177] * The control-plane node multinode-416000 host is not running: state=Stopped
	I0910 11:04:27.933698    4591 out.go:177]   To start a cluster, run: "minikube start -p multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-416000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (29.336458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-416000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-416000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.693042ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-416000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-416000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-416000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (29.479208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
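
The secondary error at multinode_test.go:230, "unexpected end of JSON input", follows from the primary one: kubectl exited non-zero, so the test decoded an empty stdout. A minimal sketch of that decode failure, assuming only that the test unmarshals kubectl's captured output (the variable names are illustrative, not the test's):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// kubectl failed, so the captured stdout is empty.
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}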

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-416000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-416000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-416000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"multinode-416000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (29.921708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
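
The assertion at multinode_test.go:166 counts the Nodes array inside the profile's Config, and the payload above carries a single node where three were expected. A hedged sketch of that check, using a struct that mirrors only the fields visible in the logged JSON (not the test's real types):

package main

import (
	"encoding/json"
	"fmt"
)

// profileList mirrors only the fields visible in the output above; it is
// not the test's real type.
type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				ControlPlane bool
				Worker       bool
			}
		}
	} `json:"valid"`
}

func main() {
	// Trimmed from the payload logged above.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-416000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	fmt.Println(len(pl.Valid[0].Config.Nodes)) // 1, where the test wanted 3
}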

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status --output json --alsologtostderr: exit status 7 (29.892959ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-416000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 11:04:28.132992    4603 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:04:28.133155    4603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:04:28.133158    4603 out.go:358] Setting ErrFile to fd 2...
	I0910 11:04:28.133160    4603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:04:28.133286    4603 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:04:28.133411    4603 out.go:352] Setting JSON to true
	I0910 11:04:28.133422    4603 mustload.go:65] Loading cluster: multinode-416000
	I0910 11:04:28.133482    4603 notify.go:220] Checking for updates...
	I0910 11:04:28.133641    4603 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:04:28.133647    4603 status.go:255] checking status of multinode-416000 ...
	I0910 11:04:28.133851    4603 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0910 11:04:28.133855    4603 status.go:343] host is not running, skipping remaining checks
	I0910 11:04:28.133857    4603 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-416000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (29.906291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
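
The decode failure at multinode_test.go:191 is a shape mismatch: with the profile reduced to one stopped node, `status --output json` prints a single object, but the test unmarshals into a slice. A minimal reproduction under that assumption:

package main

import (
	"encoding/json"
	"fmt"
)

// Status is trimmed to the fields shown in stdout above; the test's real
// type is cmd.Status.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	out := []byte(`{"Name":"multinode-416000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	var statuses []Status
	err := json.Unmarshal(out, &statuses)
	fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
}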

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 node stop m03: exit status 85 (47.391541ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-416000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status: exit status 7 (29.88725ms)

                                                
                                                
-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status --alsologtostderr: exit status 7 (29.615167ms)

                                                
                                                
-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 11:04:28.270616    4611 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:04:28.270781    4611 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:04:28.270784    4611 out.go:358] Setting ErrFile to fd 2...
	I0910 11:04:28.270786    4611 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:04:28.270908    4611 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:04:28.271036    4611 out.go:352] Setting JSON to false
	I0910 11:04:28.271046    4611 mustload.go:65] Loading cluster: multinode-416000
	I0910 11:04:28.271123    4611 notify.go:220] Checking for updates...
	I0910 11:04:28.271247    4611 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:04:28.271252    4611 status.go:255] checking status of multinode-416000 ...
	I0910 11:04:28.271446    4611 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0910 11:04:28.271450    4611 status.go:343] host is not running, skipping remaining checks
	I0910 11:04:28.271453    4611 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-416000 status --alsologtostderr": multinode-416000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (29.695167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.276709ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 11:04:28.330680    4615 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:04:28.330933    4615 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:04:28.330937    4615 out.go:358] Setting ErrFile to fd 2...
	I0910 11:04:28.330939    4615 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:04:28.331060    4615 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:04:28.331295    4615 mustload.go:65] Loading cluster: multinode-416000
	I0910 11:04:28.331484    4615 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:04:28.335757    4615 out.go:201] 
	W0910 11:04:28.338712    4615 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0910 11:04:28.338716    4615 out.go:270] * 
	* 
	W0910 11:04:28.340340    4615 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:04:28.343689    4615 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0910 11:04:28.330680    4615 out.go:345] Setting OutFile to fd 1 ...
I0910 11:04:28.330933    4615 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 11:04:28.330937    4615 out.go:358] Setting ErrFile to fd 2...
I0910 11:04:28.330939    4615 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 11:04:28.331060    4615 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
I0910 11:04:28.331295    4615 mustload.go:65] Loading cluster: multinode-416000
I0910 11:04:28.331484    4615 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 11:04:28.335757    4615 out.go:201] 
W0910 11:04:28.338712    4615 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0910 11:04:28.338716    4615 out.go:270] * 
* 
W0910 11:04:28.340340    4615 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0910 11:04:28.343689    4615 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-416000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr: exit status 7 (30.728542ms)

                                                
                                                
-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 11:04:28.377759    4617 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:04:28.377918    4617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:04:28.377922    4617 out.go:358] Setting ErrFile to fd 2...
	I0910 11:04:28.377924    4617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:04:28.378070    4617 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:04:28.378183    4617 out.go:352] Setting JSON to false
	I0910 11:04:28.378194    4617 mustload.go:65] Loading cluster: multinode-416000
	I0910 11:04:28.378248    4617 notify.go:220] Checking for updates...
	I0910 11:04:28.378403    4617 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:04:28.378413    4617 status.go:255] checking status of multinode-416000 ...
	I0910 11:04:28.378632    4617 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0910 11:04:28.378637    4617 status.go:343] host is not running, skipping remaining checks
	I0910 11:04:28.378639    4617 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr: exit status 7 (70.518291ms)

                                                
                                                
-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 11:04:29.759039    4620 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:04:29.759216    4620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:04:29.759220    4620 out.go:358] Setting ErrFile to fd 2...
	I0910 11:04:29.759224    4620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:04:29.759381    4620 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:04:29.759546    4620 out.go:352] Setting JSON to false
	I0910 11:04:29.759561    4620 mustload.go:65] Loading cluster: multinode-416000
	I0910 11:04:29.759602    4620 notify.go:220] Checking for updates...
	I0910 11:04:29.759841    4620 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:04:29.759848    4620 status.go:255] checking status of multinode-416000 ...
	I0910 11:04:29.760144    4620 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0910 11:04:29.760150    4620 status.go:343] host is not running, skipping remaining checks
	I0910 11:04:29.760153    4620 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr: exit status 7 (73.723042ms)

                                                
                                                
-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 11:04:30.719683    4624 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:04:30.719893    4624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:04:30.719898    4624 out.go:358] Setting ErrFile to fd 2...
	I0910 11:04:30.719901    4624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:04:30.720091    4624 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:04:30.720263    4624 out.go:352] Setting JSON to false
	I0910 11:04:30.720277    4624 mustload.go:65] Loading cluster: multinode-416000
	I0910 11:04:30.720325    4624 notify.go:220] Checking for updates...
	I0910 11:04:30.720547    4624 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:04:30.720553    4624 status.go:255] checking status of multinode-416000 ...
	I0910 11:04:30.720837    4624 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0910 11:04:30.720843    4624 status.go:343] host is not running, skipping remaining checks
	I0910 11:04:30.720845    4624 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr: exit status 7 (72.240625ms)

                                                
                                                
-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 11:04:33.629693    4628 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:04:33.629917    4628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:04:33.629922    4628 out.go:358] Setting ErrFile to fd 2...
	I0910 11:04:33.629925    4628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:04:33.630109    4628 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:04:33.630275    4628 out.go:352] Setting JSON to false
	I0910 11:04:33.630295    4628 mustload.go:65] Loading cluster: multinode-416000
	I0910 11:04:33.630326    4628 notify.go:220] Checking for updates...
	I0910 11:04:33.630543    4628 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:04:33.630549    4628 status.go:255] checking status of multinode-416000 ...
	I0910 11:04:33.630841    4628 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0910 11:04:33.630846    4628 status.go:343] host is not running, skipping remaining checks
	I0910 11:04:33.630849    4628 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr: exit status 7 (75.071209ms)

                                                
                                                
-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 11:04:37.610590    4630 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:04:37.610973    4630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:04:37.610979    4630 out.go:358] Setting ErrFile to fd 2...
	I0910 11:04:37.610983    4630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:04:37.611229    4630 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:04:37.611417    4630 out.go:352] Setting JSON to false
	I0910 11:04:37.611431    4630 mustload.go:65] Loading cluster: multinode-416000
	I0910 11:04:37.611611    4630 notify.go:220] Checking for updates...
	I0910 11:04:37.612012    4630 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:04:37.612021    4630 status.go:255] checking status of multinode-416000 ...
	I0910 11:04:37.612315    4630 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0910 11:04:37.612322    4630 status.go:343] host is not running, skipping remaining checks
	I0910 11:04:37.612325    4630 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr: exit status 7 (73.557792ms)

                                                
                                                
-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 11:04:40.390683    4634 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:04:40.390882    4634 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:04:40.390887    4634 out.go:358] Setting ErrFile to fd 2...
	I0910 11:04:40.390890    4634 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:04:40.391062    4634 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:04:40.391237    4634 out.go:352] Setting JSON to false
	I0910 11:04:40.391250    4634 mustload.go:65] Loading cluster: multinode-416000
	I0910 11:04:40.391293    4634 notify.go:220] Checking for updates...
	I0910 11:04:40.391520    4634 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:04:40.391526    4634 status.go:255] checking status of multinode-416000 ...
	I0910 11:04:40.391814    4634 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0910 11:04:40.391820    4634 status.go:343] host is not running, skipping remaining checks
	I0910 11:04:40.391823    4634 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr: exit status 7 (73.456416ms)

                                                
                                                
-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 11:04:51.180460    4643 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:04:51.180696    4643 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:04:51.180701    4643 out.go:358] Setting ErrFile to fd 2...
	I0910 11:04:51.180704    4643 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:04:51.180903    4643 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:04:51.181062    4643 out.go:352] Setting JSON to false
	I0910 11:04:51.181078    4643 mustload.go:65] Loading cluster: multinode-416000
	I0910 11:04:51.181125    4643 notify.go:220] Checking for updates...
	I0910 11:04:51.181372    4643 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:04:51.181379    4643 status.go:255] checking status of multinode-416000 ...
	I0910 11:04:51.181670    4643 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0910 11:04:51.181675    4643 status.go:343] host is not running, skipping remaining checks
	I0910 11:04:51.181678    4643 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr: exit status 7 (71.63375ms)

                                                
                                                
-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 11:05:07.230103    4659 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:05:07.230322    4659 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:05:07.230327    4659 out.go:358] Setting ErrFile to fd 2...
	I0910 11:05:07.230330    4659 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:05:07.230524    4659 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:05:07.230679    4659 out.go:352] Setting JSON to false
	I0910 11:05:07.230695    4659 mustload.go:65] Loading cluster: multinode-416000
	I0910 11:05:07.230738    4659 notify.go:220] Checking for updates...
	I0910 11:05:07.230962    4659 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:05:07.230969    4659 status.go:255] checking status of multinode-416000 ...
	I0910 11:05:07.231260    4659 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0910 11:05:07.231266    4659 status.go:343] host is not running, skipping remaining checks
	I0910 11:05:07.231269    4659 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (34.280792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (38.97s)
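
Three distinct exit codes recur in the failures above. The mapping below is inferred from this report's own output rather than from minikube's source, so the constant names are assumptions:

package main

import "fmt"

// Mapping inferred from the log lines above, not from minikube source;
// treat the names as assumptions.
const (
	exitStatusStopped  = 7  // `minikube status` against a stopped host
	exitHostNotRunning = 83 // commands that need a running control plane, e.g. `node add`
	exitNodeNotFound   = 85 // GUEST_NODE_RETRIEVE: the requested node (m03) does not exist
)

func main() {
	fmt.Println(exitStatusStopped, exitHostNotRunning, exitNodeNotFound)
}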

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-416000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-416000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-416000: (3.583184s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-416000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-416000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.227444791s)

                                                
                                                
-- stdout --
	* [multinode-416000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-416000" primary control-plane node in "multinode-416000" cluster
	* Restarting existing qemu2 VM for "multinode-416000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-416000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 11:05:10.943952    4683 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:05:10.944176    4683 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:05:10.944181    4683 out.go:358] Setting ErrFile to fd 2...
	I0910 11:05:10.944184    4683 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:05:10.944359    4683 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:05:10.945555    4683 out.go:352] Setting JSON to false
	I0910 11:05:10.965621    4683 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3874,"bootTime":1725987636,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:05:10.965700    4683 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:05:10.970276    4683 out.go:177] * [multinode-416000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:05:10.978180    4683 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:05:10.978211    4683 notify.go:220] Checking for updates...
	I0910 11:05:10.985104    4683 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:05:10.988170    4683 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:05:10.991172    4683 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:05:10.994108    4683 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:05:10.997110    4683 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:05:11.000440    4683 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:05:11.000493    4683 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:05:11.005126    4683 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 11:05:11.012132    4683 start.go:297] selected driver: qemu2
	I0910 11:05:11.012139    4683 start.go:901] validating driver "qemu2" against &{Name:multinode-416000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:05:11.012202    4683 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:05:11.014786    4683 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 11:05:11.014826    4683 cni.go:84] Creating CNI manager for ""
	I0910 11:05:11.014832    4683 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0910 11:05:11.014871    4683 start.go:340] cluster config:
	{Name:multinode-416000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:05:11.018808    4683 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:05:11.026207    4683 out.go:177] * Starting "multinode-416000" primary control-plane node in "multinode-416000" cluster
	I0910 11:05:11.029997    4683 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:05:11.030013    4683 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:05:11.030021    4683 cache.go:56] Caching tarball of preloaded images
	I0910 11:05:11.030089    4683 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:05:11.030095    4683 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:05:11.030147    4683 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/multinode-416000/config.json ...
	I0910 11:05:11.030644    4683 start.go:360] acquireMachinesLock for multinode-416000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:05:11.030680    4683 start.go:364] duration metric: took 29.583µs to acquireMachinesLock for "multinode-416000"
	I0910 11:05:11.030693    4683 start.go:96] Skipping create...Using existing machine configuration
	I0910 11:05:11.030700    4683 fix.go:54] fixHost starting: 
	I0910 11:05:11.030819    4683 fix.go:112] recreateIfNeeded on multinode-416000: state=Stopped err=<nil>
	W0910 11:05:11.030827    4683 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 11:05:11.039121    4683 out.go:177] * Restarting existing qemu2 VM for "multinode-416000" ...
	I0910 11:05:11.043096    4683 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:05:11.043135    4683 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:a5:c4:a5:46:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/disk.qcow2
	I0910 11:05:11.045228    4683 main.go:141] libmachine: STDOUT: 
	I0910 11:05:11.045248    4683 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:05:11.045283    4683 fix.go:56] duration metric: took 14.584084ms for fixHost
	I0910 11:05:11.045288    4683 start.go:83] releasing machines lock for "multinode-416000", held for 14.604458ms
	W0910 11:05:11.045295    4683 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:05:11.045341    4683 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:05:11.045346    4683 start.go:729] Will try again in 5 seconds ...
	I0910 11:05:16.047419    4683 start.go:360] acquireMachinesLock for multinode-416000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:05:16.047883    4683 start.go:364] duration metric: took 339.334µs to acquireMachinesLock for "multinode-416000"
	I0910 11:05:16.048035    4683 start.go:96] Skipping create...Using existing machine configuration
	I0910 11:05:16.048059    4683 fix.go:54] fixHost starting: 
	I0910 11:05:16.048839    4683 fix.go:112] recreateIfNeeded on multinode-416000: state=Stopped err=<nil>
	W0910 11:05:16.048866    4683 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 11:05:16.054550    4683 out.go:177] * Restarting existing qemu2 VM for "multinode-416000" ...
	I0910 11:05:16.062482    4683 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:05:16.062883    4683 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:a5:c4:a5:46:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/disk.qcow2
	I0910 11:05:16.071965    4683 main.go:141] libmachine: STDOUT: 
	I0910 11:05:16.072046    4683 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:05:16.072168    4683 fix.go:56] duration metric: took 24.108833ms for fixHost
	I0910 11:05:16.072194    4683 start.go:83] releasing machines lock for "multinode-416000", held for 24.281667ms
	W0910 11:05:16.072452    4683 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-416000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-416000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:05:16.080498    4683 out.go:201] 
	W0910 11:05:16.084560    4683 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:05:16.084650    4683 out.go:270] * 
	* 
	W0910 11:05:16.087186    4683 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:05:16.094473    4683 out.go:201] 
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-416000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-416000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (32.600291ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.94s)
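
All of the restart attempts above fail at the same step: the qemu2 driver shells out to socket_vmnet_client, which cannot reach /var/run/socket_vmnet. A minimal Go probe (a sketch, not part of the suite; the socket path is the one quoted in the log) reproduces the check independently of minikube:

	// probe_socket_vmnet.go: dial the unix socket the qemu2 driver depends on.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this host this prints the same "connection refused" seen above.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
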
TestMultiNode/serial/DeleteNode (0.1s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 node delete m03: exit status 83 (40.895125ms)
-- stdout --
	* The control-plane node multinode-416000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-416000"
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-416000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status --alsologtostderr: exit status 7 (29.506208ms)
-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0910 11:05:16.277694    4701 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:05:16.277860    4701 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:05:16.277864    4701 out.go:358] Setting ErrFile to fd 2...
	I0910 11:05:16.277866    4701 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:05:16.277995    4701 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:05:16.278106    4701 out.go:352] Setting JSON to false
	I0910 11:05:16.278117    4701 mustload.go:65] Loading cluster: multinode-416000
	I0910 11:05:16.278168    4701 notify.go:220] Checking for updates...
	I0910 11:05:16.278298    4701 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:05:16.278303    4701 status.go:255] checking status of multinode-416000 ...
	I0910 11:05:16.278535    4701 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0910 11:05:16.278539    4701 status.go:343] host is not running, skipping remaining checks
	I0910 11:05:16.278542    4701 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-416000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (29.768583ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
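
The post-mortem helper treats exit status 7 from the status command as "stopped but known" rather than a hard failure. A small sketch of that check (assuming the same binary path and profile name used above):

	// status_exitcode.go: run `minikube status` and report its exit code.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "multinode-416000")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out) // prints "Stopped" in the runs above
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Println("exit code:", exitErr.ExitCode()) // 7 => host stopped, "may be ok"
		}
	}
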
TestMultiNode/serial/StopMultiNode (3.83s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-416000 stop: (3.700277709s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status: exit status 7 (68.775917ms)
-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status --alsologtostderr: exit status 7 (33.186459ms)
-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0910 11:05:20.110234    4727 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:05:20.110392    4727 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:05:20.110395    4727 out.go:358] Setting ErrFile to fd 2...
	I0910 11:05:20.110398    4727 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:05:20.110540    4727 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:05:20.110659    4727 out.go:352] Setting JSON to false
	I0910 11:05:20.110670    4727 mustload.go:65] Loading cluster: multinode-416000
	I0910 11:05:20.110726    4727 notify.go:220] Checking for updates...
	I0910 11:05:20.110871    4727 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:05:20.110878    4727 status.go:255] checking status of multinode-416000 ...
	I0910 11:05:20.111079    4727 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0910 11:05:20.111084    4727 status.go:343] host is not running, skipping remaining checks
	I0910 11:05:20.111086    4727 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-416000 status --alsologtostderr": multinode-416000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-416000 status --alsologtostderr": multinode-416000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (30.035125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.83s)
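
The two assertions that fail here count status blocks in the command output; with the cluster reduced to a single stopped control-plane profile, the counts no longer match the node count the suite expects. A hypothetical reconstruction of the counting (the expected value of 2 is an assumption based on the suite's two-node setup, not taken from the test source):

	// count_stopped.go: count "host: Stopped" entries in a status dump.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		statusOut := "multinode-416000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		got := strings.Count(statusOut, "host: Stopped")
		want := 2 // assumed two-node cluster; only one profile is reported above
		if got != want {
			fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, want)
		}
	}
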
TestMultiNode/serial/RestartMultiNode (5.25s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-416000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-416000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.180111s)
-- stdout --
	* [multinode-416000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-416000" primary control-plane node in "multinode-416000" cluster
	* Restarting existing qemu2 VM for "multinode-416000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-416000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0910 11:05:20.169087    4731 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:05:20.169198    4731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:05:20.169203    4731 out.go:358] Setting ErrFile to fd 2...
	I0910 11:05:20.169205    4731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:05:20.169335    4731 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:05:20.170316    4731 out.go:352] Setting JSON to false
	I0910 11:05:20.186204    4731 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3884,"bootTime":1725987636,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:05:20.186268    4731 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:05:20.191645    4731 out.go:177] * [multinode-416000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:05:20.198588    4731 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:05:20.198630    4731 notify.go:220] Checking for updates...
	I0910 11:05:20.204520    4731 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:05:20.207499    4731 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:05:20.210500    4731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:05:20.213563    4731 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:05:20.216549    4731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:05:20.219865    4731 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:05:20.220113    4731 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:05:20.224503    4731 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 11:05:20.231526    4731 start.go:297] selected driver: qemu2
	I0910 11:05:20.231534    4731 start.go:901] validating driver "qemu2" against &{Name:multinode-416000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:05:20.231607    4731 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:05:20.233750    4731 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 11:05:20.233778    4731 cni.go:84] Creating CNI manager for ""
	I0910 11:05:20.233782    4731 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0910 11:05:20.233846    4731 start.go:340] cluster config:
	{Name:multinode-416000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:05:20.237338    4731 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:05:20.244528    4731 out.go:177] * Starting "multinode-416000" primary control-plane node in "multinode-416000" cluster
	I0910 11:05:20.248466    4731 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:05:20.248480    4731 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:05:20.248488    4731 cache.go:56] Caching tarball of preloaded images
	I0910 11:05:20.248555    4731 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:05:20.248560    4731 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:05:20.248614    4731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/multinode-416000/config.json ...
	I0910 11:05:20.249096    4731 start.go:360] acquireMachinesLock for multinode-416000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:05:20.249130    4731 start.go:364] duration metric: took 26.792µs to acquireMachinesLock for "multinode-416000"
	I0910 11:05:20.249138    4731 start.go:96] Skipping create...Using existing machine configuration
	I0910 11:05:20.249144    4731 fix.go:54] fixHost starting: 
	I0910 11:05:20.249256    4731 fix.go:112] recreateIfNeeded on multinode-416000: state=Stopped err=<nil>
	W0910 11:05:20.249263    4731 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 11:05:20.253542    4731 out.go:177] * Restarting existing qemu2 VM for "multinode-416000" ...
	I0910 11:05:20.261498    4731 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:05:20.261533    4731 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:a5:c4:a5:46:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/disk.qcow2
	I0910 11:05:20.263442    4731 main.go:141] libmachine: STDOUT: 
	I0910 11:05:20.263462    4731 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:05:20.263489    4731 fix.go:56] duration metric: took 14.345958ms for fixHost
	I0910 11:05:20.263493    4731 start.go:83] releasing machines lock for "multinode-416000", held for 14.359167ms
	W0910 11:05:20.263502    4731 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:05:20.263537    4731 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:05:20.263547    4731 start.go:729] Will try again in 5 seconds ...
	I0910 11:05:25.265088    4731 start.go:360] acquireMachinesLock for multinode-416000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:05:25.265512    4731 start.go:364] duration metric: took 316.375µs to acquireMachinesLock for "multinode-416000"
	I0910 11:05:25.265636    4731 start.go:96] Skipping create...Using existing machine configuration
	I0910 11:05:25.265655    4731 fix.go:54] fixHost starting: 
	I0910 11:05:25.266370    4731 fix.go:112] recreateIfNeeded on multinode-416000: state=Stopped err=<nil>
	W0910 11:05:25.266392    4731 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 11:05:25.274661    4731 out.go:177] * Restarting existing qemu2 VM for "multinode-416000" ...
	I0910 11:05:25.278741    4731 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:05:25.279039    4731 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:a5:c4:a5:46:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/multinode-416000/disk.qcow2
	I0910 11:05:25.288012    4731 main.go:141] libmachine: STDOUT: 
	I0910 11:05:25.288083    4731 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:05:25.288156    4731 fix.go:56] duration metric: took 22.500625ms for fixHost
	I0910 11:05:25.288173    4731 start.go:83] releasing machines lock for "multinode-416000", held for 22.637084ms
	W0910 11:05:25.288360    4731 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-416000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-416000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:05:25.295695    4731 out.go:201] 
	W0910 11:05:25.299817    4731 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:05:25.299841    4731 out.go:270] * 
	* 
	W0910 11:05:25.302757    4731 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:05:25.309757    4731 out.go:201] 
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-416000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (67.615542ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
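
The log shows the start path retrying exactly once: StartHost fails, minikube waits five seconds ("Will try again in 5 seconds ...") and then exits with GUEST_PROVISION when the retry fails too. A stand-in sketch of that control flow (startHost below is a dummy returning the error string from the log, not the real qemu2 driver call):

	// retry_start.go: fail, wait 5s, retry once, then give up.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}
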
TestMultiNode/serial/ValidateNameConflict (20.04s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-416000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-416000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-416000-m01 --driver=qemu2 : exit status 80 (9.852038875s)
-- stdout --
	* [multinode-416000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-416000-m01" primary control-plane node in "multinode-416000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-416000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-416000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-416000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-416000-m02 --driver=qemu2 : exit status 80 (9.954211625s)
-- stdout --
	* [multinode-416000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-416000-m02" primary control-plane node in "multinode-416000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-416000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-416000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-416000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-416000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-416000: exit status 83 (81.15125ms)
-- stdout --
	* The control-plane node multinode-416000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-416000"
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-416000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (30.431458ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.04s)
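
ValidateNameConflict exercises profile-name collision handling: a new profile named like an existing cluster's node (the -mNN suffix) should be rejected, which is why the test starts multinode-416000-m01 and multinode-416000-m02 alongside multinode-416000. A hypothetical illustration of such a check (this is not minikube's actual validator, just the shape of the rule being tested):

	// name_conflict.go: reject profile names that look like another profile's nodes.
	package main

	import (
		"fmt"
		"strings"
	)

	func conflicts(newProfile string, existing []string) bool {
		for _, p := range existing {
			if newProfile == p || strings.HasPrefix(newProfile, p+"-m") {
				return true
			}
		}
		return false
	}

	func main() {
		fmt.Println(conflicts("multinode-416000-m01", []string{"multinode-416000"})) // true
		fmt.Println(conflicts("fresh-profile", []string{"multinode-416000"}))        // false
	}
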
TestPreload (10.05s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-540000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-540000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.893393166s)
-- stdout --
	* [test-preload-540000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-540000" primary control-plane node in "test-preload-540000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-540000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0910 11:05:45.568518    4797 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:05:45.568639    4797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:05:45.568643    4797 out.go:358] Setting ErrFile to fd 2...
	I0910 11:05:45.568645    4797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:05:45.568780    4797 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:05:45.569896    4797 out.go:352] Setting JSON to false
	I0910 11:05:45.586102    4797 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3909,"bootTime":1725987636,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:05:45.586166    4797 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:05:45.594542    4797 out.go:177] * [test-preload-540000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:05:45.602470    4797 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:05:45.602508    4797 notify.go:220] Checking for updates...
	I0910 11:05:45.610461    4797 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:05:45.613432    4797 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:05:45.616484    4797 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:05:45.619490    4797 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:05:45.622406    4797 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:05:45.625794    4797 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:05:45.625845    4797 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:05:45.629527    4797 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 11:05:45.636442    4797 start.go:297] selected driver: qemu2
	I0910 11:05:45.636448    4797 start.go:901] validating driver "qemu2" against <nil>
	I0910 11:05:45.636454    4797 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:05:45.638823    4797 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 11:05:45.641432    4797 out.go:177] * Automatically selected the socket_vmnet network
	I0910 11:05:45.644468    4797 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 11:05:45.644487    4797 cni.go:84] Creating CNI manager for ""
	I0910 11:05:45.644493    4797 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:05:45.644497    4797 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 11:05:45.644518    4797 start.go:340] cluster config:
	{Name:test-preload-540000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-540000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:05:45.648354    4797 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:05:45.655446    4797 out.go:177] * Starting "test-preload-540000" primary control-plane node in "test-preload-540000" cluster
	I0910 11:05:45.659463    4797 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0910 11:05:45.659538    4797 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/test-preload-540000/config.json ...
	I0910 11:05:45.659558    4797 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/test-preload-540000/config.json: {Name:mk5c0ebc5e8b5aecc979509f57e9d8d1fe910557 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:05:45.659584    4797 cache.go:107] acquiring lock: {Name:mkba30879662063684511fc7c481b54c087f476e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:05:45.659613    4797 cache.go:107] acquiring lock: {Name:mkdfa79a1cdb2bd7016d1fd8de951784b1851c38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:05:45.659629    4797 cache.go:107] acquiring lock: {Name:mk05c9368cce11eec89c5b60d0c65f41cd362a9b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:05:45.659757    4797 cache.go:107] acquiring lock: {Name:mk06ce94e3b7e3ca8885184edeca4f7e5645ca7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:05:45.659817    4797 cache.go:107] acquiring lock: {Name:mkc4fa1a19974b62696714628b72c14002627f68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:05:45.659822    4797 cache.go:107] acquiring lock: {Name:mkc6f231c819342739d936be035015e0a73ef5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:05:45.659857    4797 cache.go:107] acquiring lock: {Name:mk8d8f591f8cb58f0243629299c5d51095eaabd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:05:45.659872    4797 cache.go:107] acquiring lock: {Name:mk23ff74933ec7f62df4f3cec449a975d84b91b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:05:45.659917    4797 start.go:360] acquireMachinesLock for test-preload-540000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:05:45.659957    4797 start.go:364] duration metric: took 31.958µs to acquireMachinesLock for "test-preload-540000"
	I0910 11:05:45.659971    4797 start.go:93] Provisioning new machine with config: &{Name:test-preload-540000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-540000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:05:45.660083    4797 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:05:45.660104    4797 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0910 11:05:45.660139    4797 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0910 11:05:45.660146    4797 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0910 11:05:45.660150    4797 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0910 11:05:45.660155    4797 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0910 11:05:45.660160    4797 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0910 11:05:45.660143    4797 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 11:05:45.660058    4797 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0910 11:05:45.668467    4797 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 11:05:45.672449    4797 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0910 11:05:45.673388    4797 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 11:05:45.673535    4797 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0910 11:05:45.673579    4797 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0910 11:05:45.675037    4797 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0910 11:05:45.675041    4797 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0910 11:05:45.675059    4797 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0910 11:05:45.675158    4797 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0910 11:05:45.687335    4797 start.go:159] libmachine.API.Create for "test-preload-540000" (driver="qemu2")
	I0910 11:05:45.687356    4797 client.go:168] LocalClient.Create starting
	I0910 11:05:45.687444    4797 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:05:45.687477    4797 main.go:141] libmachine: Decoding PEM data...
	I0910 11:05:45.687486    4797 main.go:141] libmachine: Parsing certificate...
	I0910 11:05:45.687524    4797 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:05:45.687559    4797 main.go:141] libmachine: Decoding PEM data...
	I0910 11:05:45.687569    4797 main.go:141] libmachine: Parsing certificate...
	I0910 11:05:45.687981    4797 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:05:45.847423    4797 main.go:141] libmachine: Creating SSH key...
	I0910 11:05:45.897414    4797 main.go:141] libmachine: Creating Disk image...
	I0910 11:05:45.897438    4797 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:05:45.897668    4797 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/test-preload-540000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/test-preload-540000/disk.qcow2
	I0910 11:05:45.908329    4797 main.go:141] libmachine: STDOUT: 
	I0910 11:05:45.908361    4797 main.go:141] libmachine: STDERR: 
	I0910 11:05:45.908468    4797 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/test-preload-540000/disk.qcow2 +20000M
	I0910 11:05:45.924095    4797 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:05:45.924127    4797 main.go:141] libmachine: STDERR: 
	I0910 11:05:45.924139    4797 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/test-preload-540000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/test-preload-540000/disk.qcow2
	I0910 11:05:45.924144    4797 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:05:45.924159    4797 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:05:45.924184    4797 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/test-preload-540000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/test-preload-540000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/test-preload-540000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:20:b4:6f:54:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/test-preload-540000/disk.qcow2
	I0910 11:05:45.925964    4797 main.go:141] libmachine: STDOUT: 
	I0910 11:05:45.925991    4797 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:05:45.926008    4797 client.go:171] duration metric: took 238.654542ms to LocalClient.Create
	I0910 11:05:46.613271    4797 cache.go:162] opening:  /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0910 11:05:46.659035    4797 cache.go:162] opening:  /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0910 11:05:46.693687    4797 cache.go:162] opening:  /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0910 11:05:46.721062    4797 cache.go:162] opening:  /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0910 11:05:46.811349    4797 cache.go:162] opening:  /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0910 11:05:46.813171    4797 cache.go:162] opening:  /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	W0910 11:05:46.822028    4797 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0910 11:05:46.822090    4797 cache.go:162] opening:  /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0910 11:05:46.946909    4797 cache.go:157] /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0910 11:05:46.946983    4797 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.287247083s
	I0910 11:05:46.947024    4797 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0910 11:05:46.980489    4797 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0910 11:05:46.980568    4797 cache.go:162] opening:  /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0910 11:05:47.858517    4797 cache.go:157] /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0910 11:05:47.858566    4797 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.198864833s
	I0910 11:05:47.858592    4797 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0910 11:05:47.926160    4797 start.go:128] duration metric: took 2.266090584s to createHost
	I0910 11:05:47.926226    4797 start.go:83] releasing machines lock for "test-preload-540000", held for 2.266321833s
	W0910 11:05:47.926295    4797 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:05:47.939282    4797 out.go:177] * Deleting "test-preload-540000" in qemu2 ...
	W0910 11:05:47.972750    4797 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:05:47.972790    4797 start.go:729] Will try again in 5 seconds ...
	I0910 11:05:48.239531    4797 cache.go:157] /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0910 11:05:48.239587    4797 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.579917792s
	I0910 11:05:48.239616    4797 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0910 11:05:48.790547    4797 cache.go:157] /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0910 11:05:48.790593    4797 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.130818084s
	I0910 11:05:48.790638    4797 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0910 11:05:50.706708    4797 cache.go:157] /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0910 11:05:50.706751    4797 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.047307834s
	I0910 11:05:50.706776    4797 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0910 11:05:51.281327    4797 cache.go:157] /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0910 11:05:51.281374    4797 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.621920209s
	I0910 11:05:51.281403    4797 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0910 11:05:51.375383    4797 cache.go:157] /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0910 11:05:51.375430    4797 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.715987708s
	I0910 11:05:51.375463    4797 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0910 11:05:52.974758    4797 start.go:360] acquireMachinesLock for test-preload-540000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:05:52.975277    4797 start.go:364] duration metric: took 379.917µs to acquireMachinesLock for "test-preload-540000"
	I0910 11:05:52.975422    4797 start.go:93] Provisioning new machine with config: &{Name:test-preload-540000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-540000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:05:52.975675    4797 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:05:52.985319    4797 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 11:05:53.036900    4797 start.go:159] libmachine.API.Create for "test-preload-540000" (driver="qemu2")
	I0910 11:05:53.036943    4797 client.go:168] LocalClient.Create starting
	I0910 11:05:53.037057    4797 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:05:53.037132    4797 main.go:141] libmachine: Decoding PEM data...
	I0910 11:05:53.037149    4797 main.go:141] libmachine: Parsing certificate...
	I0910 11:05:53.037207    4797 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:05:53.037252    4797 main.go:141] libmachine: Decoding PEM data...
	I0910 11:05:53.037263    4797 main.go:141] libmachine: Parsing certificate...
	I0910 11:05:53.037763    4797 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:05:53.209989    4797 main.go:141] libmachine: Creating SSH key...
	I0910 11:05:53.356668    4797 main.go:141] libmachine: Creating Disk image...
	I0910 11:05:53.356678    4797 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:05:53.356885    4797 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/test-preload-540000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/test-preload-540000/disk.qcow2
	I0910 11:05:53.366443    4797 main.go:141] libmachine: STDOUT: 
	I0910 11:05:53.366469    4797 main.go:141] libmachine: STDERR: 
	I0910 11:05:53.366507    4797 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/test-preload-540000/disk.qcow2 +20000M
	I0910 11:05:53.374620    4797 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:05:53.374634    4797 main.go:141] libmachine: STDERR: 
	I0910 11:05:53.374643    4797 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/test-preload-540000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/test-preload-540000/disk.qcow2
	I0910 11:05:53.374650    4797 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:05:53.374662    4797 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:05:53.374699    4797 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/test-preload-540000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/test-preload-540000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/test-preload-540000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:e8:c8:05:a0:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/test-preload-540000/disk.qcow2
	I0910 11:05:53.376453    4797 main.go:141] libmachine: STDOUT: 
	I0910 11:05:53.376470    4797 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:05:53.376483    4797 client.go:171] duration metric: took 339.545083ms to LocalClient.Create
	I0910 11:05:55.378348    4797 start.go:128] duration metric: took 2.402713625s to createHost
	I0910 11:05:55.378387    4797 start.go:83] releasing machines lock for "test-preload-540000", held for 2.403141458s
	W0910 11:05:55.378596    4797 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-540000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-540000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:05:55.394198    4797 out.go:201] 
	W0910 11:05:55.398043    4797 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:05:55.398079    4797 out.go:270] * 
	* 
	W0910 11:05:55.400542    4797 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:05:55.417071    4797 out.go:201] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-540000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-09-10 11:05:55.436446 -0700 PDT m=+2254.897672918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-540000 -n test-preload-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-540000 -n test-preload-540000: exit status 7 (69.108542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-540000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-540000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-540000
--- FAIL: TestPreload (10.05s)
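
Triage note: every create attempt in this test fails at the same point: socket_vmnet_client exits with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', so the QEMU VM is never launched and minikube falls through to GUEST_PROVISION. A minimal host-side pre-flight check is sketched below; it assumes socket_vmnet was installed via Homebrew, and the service name comes from socket_vmnet's documentation, not from this log:

	# Verify the socket_vmnet daemon is up before running the suite (sh).
	if [ -S /var/run/socket_vmnet ] && pgrep -q -f socket_vmnet; then
		echo "socket_vmnet looks healthy"
	else
		echo "socket_vmnet not running; try: sudo brew services start socket_vmnet" >&2
		exit 1
	fi
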
TestScheduledStopUnix (10.13s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-495000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-495000 --memory=2048 --driver=qemu2 : exit status 80 (9.9755535s)

                                                
                                                
-- stdout --
	* [scheduled-stop-495000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-495000" primary control-plane node in "scheduled-stop-495000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-495000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-495000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-495000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-495000" primary control-plane node in "scheduled-stop-495000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-495000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-495000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-09-10 11:06:05.561412 -0700 PDT m=+2265.022913001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-495000 -n scheduled-stop-495000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-495000 -n scheduled-stop-495000: exit status 7 (69.312625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-495000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-495000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-495000
--- FAIL: TestScheduledStopUnix (10.13s)
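
Triage note: identical root cause to TestPreload above; the daemon behind /var/run/socket_vmnet was unreachable for the whole run. If the service claims to be running, a quick sanity check on the two pieces named in the failing command line (both paths are copied from the log itself) would be:

	# Confirm the client binary and the daemon's unix socket both exist.
	ls -l /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet
	pgrep -fl socket_vmnet || echo "no socket_vmnet process found"
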
TestSkaffold (12.31s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2605923744 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2605923744 version: (1.065480042s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-817000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-817000 --memory=2600 --driver=qemu2 : exit status 80 (9.908190959s)

                                                
                                                
-- stdout --
	* [skaffold-817000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-817000" primary control-plane node in "skaffold-817000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-817000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-817000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-817000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-817000" primary control-plane node in "skaffold-817000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-817000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-817000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-09-10 11:06:17.868802 -0700 PDT m=+2277.330632001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-817000 -n skaffold-817000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-817000 -n skaffold-817000: exit status 7 (63.2765ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-817000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-817000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-817000
--- FAIL: TestSkaffold (12.31s)
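
Triage note: a third test failing on the same socket_vmnet connection error. To separate a minikube regression from a broken host daemon, one diagnostic option (an assumption based on the minikube qemu2 driver docs, not something this run attempted) is to start a throwaway profile on the driver's builtin user-mode network, which bypasses socket_vmnet entirely:

	# Hypothetical repro outside the suite; the profile name is arbitrary.
	out/minikube-darwin-arm64 start -p repro-builtin --driver=qemu2 --network=builtin
	out/minikube-darwin-arm64 delete -p repro-builtin

The builtin network trades away host-to-guest connectivity, so it only isolates the failure; the suite itself still needs socket_vmnet restored.
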
TestRunningBinaryUpgrade (615.09s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.872813466 start -p running-upgrade-978000 --memory=2200 --vm-driver=qemu2 
E0910 11:07:07.499463    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
E0910 11:07:20.649741    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/functional-475000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.872813466 start -p running-upgrade-978000 --memory=2200 --vm-driver=qemu2 : (1m5.764253041s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-978000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0910 11:08:43.728689    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/functional-475000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-978000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m34.115217042s)

                                                
                                                
-- stdout --
	* [running-upgrade-978000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-978000" primary control-plane node in "running-upgrade-978000" cluster
	* Updating the running qemu2 "running-upgrade-978000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 11:08:12.229047    5250 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:08:12.229183    5250 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:08:12.229187    5250 out.go:358] Setting ErrFile to fd 2...
	I0910 11:08:12.229190    5250 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:08:12.229325    5250 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:08:12.230465    5250 out.go:352] Setting JSON to false
	I0910 11:08:12.247154    5250 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4056,"bootTime":1725987636,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:08:12.247232    5250 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:08:12.251631    5250 out.go:177] * [running-upgrade-978000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:08:12.258631    5250 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:08:12.258721    5250 notify.go:220] Checking for updates...
	I0910 11:08:12.266524    5250 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:08:12.270570    5250 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:08:12.273607    5250 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:08:12.276561    5250 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:08:12.283575    5250 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:08:12.286796    5250 config.go:182] Loaded profile config "running-upgrade-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0910 11:08:12.291537    5250 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0910 11:08:12.295586    5250 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:08:12.299542    5250 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 11:08:12.308562    5250 start.go:297] selected driver: qemu2
	I0910 11:08:12.308568    5250 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-978000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50307 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-978000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0910 11:08:12.308614    5250 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:08:12.311098    5250 cni.go:84] Creating CNI manager for ""
	I0910 11:08:12.311115    5250 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:08:12.311144    5250 start.go:340] cluster config:
	{Name:running-upgrade-978000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50307 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-978000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0910 11:08:12.311200    5250 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:08:12.318535    5250 out.go:177] * Starting "running-upgrade-978000" primary control-plane node in "running-upgrade-978000" cluster
	I0910 11:08:12.324548    5250 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0910 11:08:12.324574    5250 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0910 11:08:12.324582    5250 cache.go:56] Caching tarball of preloaded images
	I0910 11:08:12.324650    5250 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:08:12.324656    5250 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0910 11:08:12.324713    5250 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/running-upgrade-978000/config.json ...
	I0910 11:08:12.325010    5250 start.go:360] acquireMachinesLock for running-upgrade-978000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:08:12.325041    5250 start.go:364] duration metric: took 21.708µs to acquireMachinesLock for "running-upgrade-978000"
	I0910 11:08:12.325049    5250 start.go:96] Skipping create...Using existing machine configuration
	I0910 11:08:12.325055    5250 fix.go:54] fixHost starting: 
	I0910 11:08:12.325616    5250 fix.go:112] recreateIfNeeded on running-upgrade-978000: state=Running err=<nil>
	W0910 11:08:12.325626    5250 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 11:08:12.330545    5250 out.go:177] * Updating the running qemu2 "running-upgrade-978000" VM ...
	I0910 11:08:12.338502    5250 machine.go:93] provisionDockerMachine start ...
	I0910 11:08:12.338539    5250 main.go:141] libmachine: Using SSH client type: native
	I0910 11:08:12.338645    5250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a2bba0] 0x100a2e400 <nil>  [] 0s} localhost 50275 <nil> <nil>}
	I0910 11:08:12.338649    5250 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 11:08:12.401213    5250 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-978000
	
	I0910 11:08:12.401231    5250 buildroot.go:166] provisioning hostname "running-upgrade-978000"
	I0910 11:08:12.401272    5250 main.go:141] libmachine: Using SSH client type: native
	I0910 11:08:12.401395    5250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a2bba0] 0x100a2e400 <nil>  [] 0s} localhost 50275 <nil> <nil>}
	I0910 11:08:12.401401    5250 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-978000 && echo "running-upgrade-978000" | sudo tee /etc/hostname
	I0910 11:08:12.465745    5250 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-978000
	
	I0910 11:08:12.465797    5250 main.go:141] libmachine: Using SSH client type: native
	I0910 11:08:12.465913    5250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a2bba0] 0x100a2e400 <nil>  [] 0s} localhost 50275 <nil> <nil>}
	I0910 11:08:12.465921    5250 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-978000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-978000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-978000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 11:08:12.525641    5250 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 11:08:12.525654    5250 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19598-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19598-1276/.minikube}
	I0910 11:08:12.525663    5250 buildroot.go:174] setting up certificates
	I0910 11:08:12.525671    5250 provision.go:84] configureAuth start
	I0910 11:08:12.525676    5250 provision.go:143] copyHostCerts
	I0910 11:08:12.525755    5250 exec_runner.go:144] found /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.pem, removing ...
	I0910 11:08:12.525761    5250 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.pem
	I0910 11:08:12.525877    5250 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.pem (1078 bytes)
	I0910 11:08:12.526076    5250 exec_runner.go:144] found /Users/jenkins/minikube-integration/19598-1276/.minikube/cert.pem, removing ...
	I0910 11:08:12.526080    5250 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19598-1276/.minikube/cert.pem
	I0910 11:08:12.526126    5250 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19598-1276/.minikube/cert.pem (1123 bytes)
	I0910 11:08:12.526224    5250 exec_runner.go:144] found /Users/jenkins/minikube-integration/19598-1276/.minikube/key.pem, removing ...
	I0910 11:08:12.526227    5250 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19598-1276/.minikube/key.pem
	I0910 11:08:12.526266    5250 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19598-1276/.minikube/key.pem (1675 bytes)
	I0910 11:08:12.526350    5250 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-978000 san=[127.0.0.1 localhost minikube running-upgrade-978000]
	I0910 11:08:12.689662    5250 provision.go:177] copyRemoteCerts
	I0910 11:08:12.689703    5250 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 11:08:12.689712    5250 sshutil.go:53] new ssh client: &{IP:localhost Port:50275 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/running-upgrade-978000/id_rsa Username:docker}
	I0910 11:08:12.722863    5250 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 11:08:12.730035    5250 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0910 11:08:12.736910    5250 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 11:08:12.744409    5250 provision.go:87] duration metric: took 218.737417ms to configureAuth
	I0910 11:08:12.744421    5250 buildroot.go:189] setting minikube options for container-runtime
	I0910 11:08:12.744535    5250 config.go:182] Loaded profile config "running-upgrade-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0910 11:08:12.744568    5250 main.go:141] libmachine: Using SSH client type: native
	I0910 11:08:12.744671    5250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a2bba0] 0x100a2e400 <nil>  [] 0s} localhost 50275 <nil> <nil>}
	I0910 11:08:12.744676    5250 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 11:08:12.803943    5250 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 11:08:12.803953    5250 buildroot.go:70] root file system type: tmpfs
	I0910 11:08:12.804002    5250 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 11:08:12.804058    5250 main.go:141] libmachine: Using SSH client type: native
	I0910 11:08:12.804170    5250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a2bba0] 0x100a2e400 <nil>  [] 0s} localhost 50275 <nil> <nil>}
	I0910 11:08:12.804206    5250 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 11:08:12.868575    5250 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 11:08:12.868631    5250 main.go:141] libmachine: Using SSH client type: native
	I0910 11:08:12.868747    5250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a2bba0] 0x100a2e400 <nil>  [] 0s} localhost 50275 <nil> <nil>}
	I0910 11:08:12.868761    5250 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 11:08:12.929543    5250 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 11:08:12.929555    5250 machine.go:96] duration metric: took 591.063125ms to provisionDockerMachine
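	The two steps above are the core of the idiom the unit-file comments describe: stage the new unit, and only swap it in (with a daemon-reload and restart) when diff says it changed. A minimal stand-alone sketch of the same pattern, using a hypothetical drop-in path rather than the staged full unit minikube writes:
	
	#!/usr/bin/env bash
	# Sketch: apply a systemd override only when it actually differs.
	# Paths are illustrative; the ExecStart-clearing trick is the one the
	# comments above explain (a second ExecStart= is only legal after an
	# empty ExecStart= resets the inherited one).
	set -euo pipefail
	
	NEW=$(mktemp)
	CUR=/etc/systemd/system/docker.service.d/10-override.conf
	
	cat > "$NEW" <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	
	# diff exits non-zero when the files differ (or $CUR is missing), so
	# the apply-and-restart block runs only on a real change.
	sudo diff -u "$CUR" "$NEW" || {
	  sudo mkdir -p "$(dirname "$CUR")"
	  sudo mv "$NEW" "$CUR"
	  sudo systemctl daemon-reload
	  sudo systemctl restart docker
	}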
	I0910 11:08:12.929559    5250 start.go:293] postStartSetup for "running-upgrade-978000" (driver="qemu2")
	I0910 11:08:12.929566    5250 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 11:08:12.929615    5250 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 11:08:12.929638    5250 sshutil.go:53] new ssh client: &{IP:localhost Port:50275 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/running-upgrade-978000/id_rsa Username:docker}
	I0910 11:08:12.962734    5250 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 11:08:12.964164    5250 info.go:137] Remote host: Buildroot 2021.02.12
	I0910 11:08:12.964172    5250 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19598-1276/.minikube/addons for local assets ...
	I0910 11:08:12.964241    5250 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19598-1276/.minikube/files for local assets ...
	I0910 11:08:12.964330    5250 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19598-1276/.minikube/files/etc/ssl/certs/17952.pem -> 17952.pem in /etc/ssl/certs
	I0910 11:08:12.964431    5250 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 11:08:12.967614    5250 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/files/etc/ssl/certs/17952.pem --> /etc/ssl/certs/17952.pem (1708 bytes)
	I0910 11:08:12.975142    5250 start.go:296] duration metric: took 45.578667ms for postStartSetup
	I0910 11:08:12.975155    5250 fix.go:56] duration metric: took 650.119541ms for fixHost
	I0910 11:08:12.975197    5250 main.go:141] libmachine: Using SSH client type: native
	I0910 11:08:12.975301    5250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a2bba0] 0x100a2e400 <nil>  [] 0s} localhost 50275 <nil> <nil>}
	I0910 11:08:12.975307    5250 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 11:08:13.036863    5250 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725991693.157551496
	
	I0910 11:08:13.036871    5250 fix.go:216] guest clock: 1725991693.157551496
	I0910 11:08:13.036875    5250 fix.go:229] Guest: 2024-09-10 11:08:13.157551496 -0700 PDT Remote: 2024-09-10 11:08:12.975157 -0700 PDT m=+0.766293042 (delta=182.394496ms)
	I0910 11:08:13.036886    5250 fix.go:200] guest clock delta is within tolerance: 182.394496ms
	I0910 11:08:13.036888    5250 start.go:83] releasing machines lock for "running-upgrade-978000", held for 711.861833ms
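	The fix.go lines above compute the guest/host clock delta by running date +%s.%N over SSH and comparing against the local clock, resyncing only when the delta leaves tolerance. A rough shell equivalent (SSH port and user taken from the log; the 2s tolerance is an assumption, not minikube's actual constant):
	
	#!/usr/bin/env bash
	# Measure the guest-host clock skew in seconds.
	set -euo pipefail
	
	GUEST=$(ssh -p 50275 docker@localhost 'date +%s.%N')
	# python3 for the host clock: macOS date(1) has no %N.
	HOST=$(python3 -c 'import time; print(f"{time.time():.9f}")')
	
	awk -v g="$GUEST" -v h="$HOST" 'BEGIN {
	  d = g - h; if (d < 0) d = -d
	  printf "guest-host delta: %.6fs\n", d
	  exit (d > 2.0) ? 1 : 0   # non-zero exit when outside tolerance
	}'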
	I0910 11:08:13.036949    5250 ssh_runner.go:195] Run: cat /version.json
	I0910 11:08:13.036959    5250 sshutil.go:53] new ssh client: &{IP:localhost Port:50275 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/running-upgrade-978000/id_rsa Username:docker}
	I0910 11:08:13.036949    5250 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 11:08:13.036986    5250 sshutil.go:53] new ssh client: &{IP:localhost Port:50275 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/running-upgrade-978000/id_rsa Username:docker}
	W0910 11:08:13.037520    5250 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50275: connect: connection refused
	I0910 11:08:13.037542    5250 retry.go:31] will retry after 141.818619ms: dial tcp [::1]:50275: connect: connection refused
	W0910 11:08:13.213046    5250 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0910 11:08:13.213122    5250 ssh_runner.go:195] Run: systemctl --version
	I0910 11:08:13.215032    5250 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 11:08:13.216924    5250 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 11:08:13.216948    5250 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0910 11:08:13.219832    5250 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0910 11:08:13.224433    5250 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
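	The two find ... -exec sed one-liners above pin every bridge- and podman-style CNI config to the 10.244.0.0/16 pod network (and drop IPv6 entries). For a single file the podman variant reduces to the sketch below; the file name is the one the log reports as reconfigured:
	
	#!/usr/bin/env bash
	# Force one CNI conflist onto the cluster pod CIDR (readable form of
	# the sed above; the original also prunes IPv6 dst/subnet lines).
	set -euo pipefail
	CONF=/etc/cni/net.d/87-podman-bridge.conflist
	
	sudo sed -i -r \
	  -e 's|"subnet": "[^"]*"|"subnet": "10.244.0.0/16"|g' \
	  -e 's|"gateway": "[^"]*"|"gateway": "10.244.0.1"|g' \
	  "$CONF"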
	I0910 11:08:13.224441    5250 start.go:495] detecting cgroup driver to use...
	I0910 11:08:13.224509    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 11:08:13.229581    5250 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0910 11:08:13.232332    5250 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 11:08:13.235537    5250 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 11:08:13.235560    5250 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 11:08:13.238976    5250 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 11:08:13.242150    5250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 11:08:13.245286    5250 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 11:08:13.248027    5250 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 11:08:13.251330    5250 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 11:08:13.254554    5250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 11:08:13.257281    5250 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0910 11:08:13.259961    5250 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 11:08:13.263060    5250 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 11:08:13.265771    5250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 11:08:13.362553    5250 ssh_runner.go:195] Run: sudo systemctl restart containerd
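	Gathered into one script, the containerd edits above do five things: pin the sandbox (pause) image, disable restrict_oom_score_adj, select the cgroupfs driver by forcing SystemdCgroup = false, migrate both v1 runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d, before the reload and restart that follow. Condensed, using the same sed expressions as the log:
	
	#!/usr/bin/env bash
	# Condensed containerd reconfiguration, mirroring the commands above.
	set -euo pipefail
	CFG=/etc/containerd/config.toml
	
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' "$CFG"
	sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' "$CFG"
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"   # cgroupfs driver
	sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$CFG"
	sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' "$CFG"
	sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$CFG"
	
	sudo systemctl daemon-reload
	sudo systemctl restart containerd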
	I0910 11:08:13.369194    5250 start.go:495] detecting cgroup driver to use...
	I0910 11:08:13.369281    5250 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 11:08:13.377049    5250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 11:08:13.381860    5250 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 11:08:13.390269    5250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 11:08:13.394866    5250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 11:08:13.399410    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 11:08:13.405028    5250 ssh_runner.go:195] Run: which cri-dockerd
	I0910 11:08:13.406278    5250 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 11:08:13.408969    5250 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0910 11:08:13.413613    5250 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 11:08:13.511659    5250 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 11:08:13.599759    5250 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 11:08:13.599823    5250 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0910 11:08:13.605301    5250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 11:08:13.692985    5250 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 11:08:26.226749    5250 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.534079917s)
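	The 130-byte /etc/docker/daemon.json pushed a few lines up is not echoed in the log; given the "configuring docker to use cgroupfs" message, a plausible reconstruction is the sketch below. Treat the contents as an assumption, not a capture:
	
	#!/usr/bin/env bash
	# Hypothetical daemon.json matching the cgroupfs message (contents
	# assumed; the log records only the 130-byte size).
	sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": {"max-size": "100m"},
	  "storage-driver": "overlay2"
	}
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker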
	I0910 11:08:26.226819    5250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0910 11:08:26.231821    5250 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0910 11:08:26.240456    5250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 11:08:26.246177    5250 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 11:08:26.316034    5250 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 11:08:26.406356    5250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 11:08:26.479231    5250 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 11:08:26.485485    5250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 11:08:26.489932    5250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 11:08:26.570616    5250 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0910 11:08:26.611523    5250 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 11:08:26.611596    5250 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 11:08:26.615278    5250 start.go:563] Will wait 60s for crictl version
	I0910 11:08:26.615345    5250 ssh_runner.go:195] Run: which crictl
	I0910 11:08:26.616675    5250 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 11:08:26.628542    5250 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
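	The socket wait and version probe above translate directly to shell: block until /var/run/cri-dockerd.sock appears (the log allows 60s), then ask the runtime to identify itself. A minimal equivalent:
	
	#!/usr/bin/env bash
	# Wait for the CRI socket, then query it, as the log does.
	set -euo pipefail
	SOCK=/var/run/cri-dockerd.sock
	
	for _ in $(seq 1 60); do
	  [ -S "$SOCK" ] && break
	  sleep 1
	done
	
	sudo crictl --runtime-endpoint "unix://$SOCK" version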
	I0910 11:08:26.628613    5250 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 11:08:26.641225    5250 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 11:08:26.661704    5250 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0910 11:08:26.661852    5250 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0910 11:08:26.663274    5250 kubeadm.go:883] updating cluster {Name:running-upgrade-978000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50307 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-978000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0910 11:08:26.663319    5250 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0910 11:08:26.663363    5250 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 11:08:26.674126    5250 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0910 11:08:26.674135    5250 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0910 11:08:26.674382    5250 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0910 11:08:26.677979    5250 ssh_runner.go:195] Run: which lz4
	I0910 11:08:26.679230    5250 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 11:08:26.680415    5250 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 11:08:26.680425    5250 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0910 11:08:27.632361    5250 docker.go:649] duration metric: took 953.184958ms to copy over tarball
	I0910 11:08:27.632423    5250 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 11:08:28.788976    5250 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.156566209s)
	I0910 11:08:28.788989    5250 ssh_runner.go:146] rm: /preloaded.tar.lz4
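	The preload sequence above, replayed as a script: probe for the tarball on the guest, copy it over on a miss, unpack into /var with lz4 decompression while preserving security xattrs, then delete it. SSH details mirror the log; staging through /tmp is an assumption (the log copies straight to /preloaded.tar.lz4):
	
	#!/usr/bin/env bash
	# Replay of the preload-tarball steps recorded above.
	set -euo pipefail
	TARBALL=preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	CACHE=~/.minikube/cache/preloaded-tarball/$TARBALL   # host-side cache
	
	if ! ssh -p 50275 docker@localhost 'stat /preloaded.tar.lz4' >/dev/null 2>&1; then
	  scp -P 50275 "$CACHE" docker@localhost:/tmp/preloaded.tar.lz4
	  ssh -p 50275 docker@localhost 'sudo mv /tmp/preloaded.tar.lz4 /preloaded.tar.lz4'
	fi
	ssh -p 50275 docker@localhost \
	  'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'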
	I0910 11:08:28.804612    5250 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0910 11:08:28.807710    5250 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0910 11:08:28.812587    5250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 11:08:28.895969    5250 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 11:08:30.155518    5250 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.259564625s)
	I0910 11:08:30.155613    5250 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 11:08:30.167459    5250 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0910 11:08:30.167468    5250 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0910 11:08:30.167472    5250 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0910 11:08:30.174720    5250 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 11:08:30.176079    5250 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0910 11:08:30.177691    5250 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0910 11:08:30.177873    5250 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 11:08:30.179388    5250 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0910 11:08:30.179416    5250 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0910 11:08:30.180319    5250 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0910 11:08:30.180967    5250 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0910 11:08:30.181851    5250 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0910 11:08:30.182188    5250 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0910 11:08:30.183406    5250 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0910 11:08:30.183687    5250 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0910 11:08:30.184265    5250 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0910 11:08:30.184275    5250 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0910 11:08:30.185632    5250 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0910 11:08:30.186149    5250 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0910 11:08:31.184788    5250 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0910 11:08:31.184785    5250 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0910 11:08:31.231642    5250 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0910 11:08:31.231696    5250 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0910 11:08:31.231809    5250 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0910 11:08:31.232488    5250 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0910 11:08:31.232515    5250 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0910 11:08:31.232552    5250 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0910 11:08:31.238808    5250 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0910 11:08:31.265441    5250 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0910 11:08:31.266245    5250 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0910 11:08:31.269464    5250 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0910 11:08:31.276559    5250 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0910 11:08:31.276581    5250 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0910 11:08:31.276630    5250 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	W0910 11:08:31.279292    5250 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0910 11:08:31.279373    5250 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0910 11:08:31.280638    5250 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0910 11:08:31.294088    5250 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0910 11:08:31.294111    5250 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0910 11:08:31.294161    5250 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0910 11:08:31.306517    5250 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0910 11:08:31.320198    5250 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0910 11:08:31.320219    5250 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0910 11:08:31.320230    5250 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0910 11:08:31.320202    5250 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0910 11:08:31.320263    5250 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0910 11:08:31.320282    5250 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0910 11:08:31.320284    5250 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0910 11:08:31.335992    5250 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0910 11:08:31.335996    5250 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0910 11:08:31.336117    5250 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0910 11:08:31.336118    5250 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0910 11:08:31.337931    5250 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0910 11:08:31.337950    5250 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0910 11:08:31.338100    5250 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0910 11:08:31.338107    5250 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0910 11:08:31.343072    5250 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0910 11:08:31.343174    5250 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 11:08:31.357317    5250 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0910 11:08:31.370981    5250 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0910 11:08:31.371007    5250 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 11:08:31.371073    5250 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 11:08:31.379287    5250 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0910 11:08:31.379303    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0910 11:08:31.388855    5250 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0910 11:08:31.388876    5250 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0910 11:08:31.388938    5250 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0910 11:08:32.397152    5250 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.026068208s)
	I0910 11:08:32.397195    5250 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0910 11:08:32.397256    5250 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load": (1.017964208s)
	I0910 11:08:32.397275    5250 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0910 11:08:32.397319    5250 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0910 11:08:32.397338    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0910 11:08:32.397397    5250 ssh_runner.go:235] Completed: docker rmi registry.k8s.io/etcd:3.5.3-0: (1.008471042s)
	I0910 11:08:32.397441    5250 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0910 11:08:32.397595    5250 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0910 11:08:32.472908    5250 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0910 11:08:32.472928    5250 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0910 11:08:32.472953    5250 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0910 11:08:32.506538    5250 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0910 11:08:32.506552    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0910 11:08:32.756199    5250 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0910 11:08:32.756246    5250 cache_images.go:92] duration metric: took 2.588835834s to LoadCachedImages
	W0910 11:08:32.756281    5250 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
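	Every cache miss above runs the same four-step loop per image: inspect the tag's ID on the guest, rmi it when it does not match the expected hash, scp the cached tarball in, and stream it into docker load. For one image (pause:3.7, whose transfer succeeds above) the loop body reduces to:
	
	#!/usr/bin/env bash
	# One iteration of LoadCachedImages, run on the guest, for pause:3.7.
	set -euo pipefail
	IMG=registry.k8s.io/pause:3.7
	TAR=/var/lib/minikube/images/pause_3.7   # staging path from the log
	
	# Drop a stale tag so the freshly loaded image wins.
	if docker image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
	  docker rmi "$IMG"
	fi
	
	# The cached tarball was scp'd to $TAR earlier; load it into the daemon.
	sudo cat "$TAR" | docker load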
	I0910 11:08:32.756291    5250 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0910 11:08:32.756344    5250 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-978000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-978000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 11:08:32.756404    5250 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0910 11:08:32.773551    5250 cni.go:84] Creating CNI manager for ""
	I0910 11:08:32.773562    5250 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:08:32.773567    5250 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 11:08:32.773575    5250 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-978000 NodeName:running-upgrade-978000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 11:08:32.773649    5250 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-978000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 11:08:32.773717    5250 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0910 11:08:32.776684    5250 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 11:08:32.776717    5250 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 11:08:32.779414    5250 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0910 11:08:32.784474    5250 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 11:08:32.789483    5250 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0910 11:08:32.794693    5250 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0910 11:08:32.795898    5250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 11:08:32.885450    5250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 11:08:32.890521    5250 certs.go:68] Setting up /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/running-upgrade-978000 for IP: 10.0.2.15
	I0910 11:08:32.890529    5250 certs.go:194] generating shared ca certs ...
	I0910 11:08:32.890537    5250 certs.go:226] acquiring lock for ca certs: {Name:mk5b237e8da18ff05d2622f0be5a14dbe0d9b4f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:08:32.890705    5250 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.key
	I0910 11:08:32.890760    5250 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/proxy-client-ca.key
	I0910 11:08:32.890766    5250 certs.go:256] generating profile certs ...
	I0910 11:08:32.890844    5250 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/running-upgrade-978000/client.key
	I0910 11:08:32.890864    5250 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/running-upgrade-978000/apiserver.key.e1b1967a
	I0910 11:08:32.890875    5250 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/running-upgrade-978000/apiserver.crt.e1b1967a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0910 11:08:32.944816    5250 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/running-upgrade-978000/apiserver.crt.e1b1967a ...
	I0910 11:08:32.944823    5250 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/running-upgrade-978000/apiserver.crt.e1b1967a: {Name:mk44058f77a03c1e32dbb5f59df753bebb89c4dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:08:32.945086    5250 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/running-upgrade-978000/apiserver.key.e1b1967a ...
	I0910 11:08:32.945091    5250 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/running-upgrade-978000/apiserver.key.e1b1967a: {Name:mk337de922b02c447b1bcd76f705787307959df4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:08:32.945229    5250 certs.go:381] copying /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/running-upgrade-978000/apiserver.crt.e1b1967a -> /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/running-upgrade-978000/apiserver.crt
	I0910 11:08:32.945382    5250 certs.go:385] copying /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/running-upgrade-978000/apiserver.key.e1b1967a -> /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/running-upgrade-978000/apiserver.key
	I0910 11:08:32.945548    5250 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/running-upgrade-978000/proxy-client.key
	I0910 11:08:32.945680    5250 certs.go:484] found cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/1795.pem (1338 bytes)
	W0910 11:08:32.945708    5250 certs.go:480] ignoring /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/1795_empty.pem, impossibly tiny 0 bytes
	I0910 11:08:32.945714    5250 certs.go:484] found cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca-key.pem (1675 bytes)
	I0910 11:08:32.945740    5250 certs.go:484] found cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem (1078 bytes)
	I0910 11:08:32.945767    5250 certs.go:484] found cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem (1123 bytes)
	I0910 11:08:32.945791    5250 certs.go:484] found cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/key.pem (1675 bytes)
	I0910 11:08:32.945845    5250 certs.go:484] found cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/files/etc/ssl/certs/17952.pem (1708 bytes)
	I0910 11:08:32.946181    5250 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 11:08:32.953952    5250 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 11:08:32.961629    5250 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 11:08:32.968620    5250 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 11:08:32.975447    5250 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/running-upgrade-978000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0910 11:08:32.982661    5250 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/running-upgrade-978000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 11:08:32.989916    5250 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/running-upgrade-978000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 11:08:32.997156    5250 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/running-upgrade-978000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0910 11:08:33.003798    5250 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 11:08:33.011604    5250 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/1795.pem --> /usr/share/ca-certificates/1795.pem (1338 bytes)
	I0910 11:08:33.018892    5250 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/files/etc/ssl/certs/17952.pem --> /usr/share/ca-certificates/17952.pem (1708 bytes)
	I0910 11:08:33.026173    5250 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 11:08:33.031725    5250 ssh_runner.go:195] Run: openssl version
	I0910 11:08:33.033617    5250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 11:08:33.036571    5250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 11:08:33.038086    5250 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 11:08:33.038109    5250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 11:08:33.040118    5250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 11:08:33.042821    5250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1795.pem && ln -fs /usr/share/ca-certificates/1795.pem /etc/ssl/certs/1795.pem"
	I0910 11:08:33.046994    5250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1795.pem
	I0910 11:08:33.048581    5250 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:44 /usr/share/ca-certificates/1795.pem
	I0910 11:08:33.048602    5250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1795.pem
	I0910 11:08:33.050385    5250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1795.pem /etc/ssl/certs/51391683.0"
	I0910 11:08:33.053615    5250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17952.pem && ln -fs /usr/share/ca-certificates/17952.pem /etc/ssl/certs/17952.pem"
	I0910 11:08:33.056578    5250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17952.pem
	I0910 11:08:33.057981    5250 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:44 /usr/share/ca-certificates/17952.pem
	I0910 11:08:33.057997    5250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17952.pem
	I0910 11:08:33.059865    5250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17952.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 11:08:33.062900    5250 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 11:08:33.064579    5250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 11:08:33.066477    5250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 11:08:33.068487    5250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 11:08:33.070287    5250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 11:08:33.072450    5250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 11:08:33.074204    5250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
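	The cert block above ends with two checks worth calling out: each CA is linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0 and friends above), and every serving cert is verified to remain valid for at least another day, since -checkend takes seconds and 86400 s = 24 h. In isolation:
	
	#!/usr/bin/env bash
	# Subject-hash install plus 24h expiry check, as in the log.
	set -euo pipefail
	CA=/usr/share/ca-certificates/minikubeCA.pem
	
	HASH=$(openssl x509 -hash -noout -in "$CA")   # e.g. b5213941
	sudo ln -fs "$CA" "/etc/ssl/certs/${HASH}.0"
	
	# Exit non-zero if the cert expires within the next 86400 seconds.
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400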
	I0910 11:08:33.076127    5250 kubeadm.go:392] StartCluster: {Name:running-upgrade-978000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50307 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-978000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0910 11:08:33.076198    5250 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 11:08:33.086421    5250 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 11:08:33.090547    5250 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 11:08:33.090554    5250 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 11:08:33.090578    5250 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 11:08:33.093748    5250 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 11:08:33.093984    5250 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-978000" does not appear in /Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:08:33.094039    5250 kubeconfig.go:62] /Users/jenkins/minikube-integration/19598-1276/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-978000" cluster setting kubeconfig missing "running-upgrade-978000" context setting]
	I0910 11:08:33.094187    5250 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/kubeconfig: {Name:mk1f6cc8b92900503b90f69186dd5a0cadd3a95f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:08:33.094853    5250 kapi.go:59] client config for running-upgrade-978000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/running-upgrade-978000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/running-upgrade-978000/client.key", CAFile:"/Users/jenkins/minikube-integration/19598-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101ff2200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 11:08:33.095175    5250 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 11:08:33.098282    5250 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-978000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0910 11:08:33.098288    5250 kubeadm.go:1160] stopping kube-system containers ...
	I0910 11:08:33.098325    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 11:08:33.109896    5250 docker.go:483] Stopping containers: [47ae4dc54999 c0ef9124ecc4 e49764610866 296a4d729754 31602e89a910 96a1e8330645 c42b4d7f7d01 a1e228399b97 a9bb1278a4a7 187afc5938ed 13966ceb0569 e202bd667108 418cf9ccc5a6 9c9fdc4a777c]
	I0910 11:08:33.109965    5250 ssh_runner.go:195] Run: docker stop 47ae4dc54999 c0ef9124ecc4 e49764610866 296a4d729754 31602e89a910 96a1e8330645 c42b4d7f7d01 a1e228399b97 a9bb1278a4a7 187afc5938ed 13966ceb0569 e202bd667108 418cf9ccc5a6 9c9fdc4a777c
	I0910 11:08:33.121333    5250 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 11:08:33.211530    5250 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 11:08:33.215392    5250 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Sep 10 18:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Sep 10 18:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 10 18:08 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Sep 10 18:07 /etc/kubernetes/scheduler.conf
	
	I0910 11:08:33.215426    5250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50307 /etc/kubernetes/admin.conf
	I0910 11:08:33.218848    5250 kubeadm.go:163] "https://control-plane.minikube.internal:50307" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50307 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0910 11:08:33.218876    5250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 11:08:33.222190    5250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50307 /etc/kubernetes/kubelet.conf
	I0910 11:08:33.225368    5250 kubeadm.go:163] "https://control-plane.minikube.internal:50307" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50307 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0910 11:08:33.225393    5250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 11:08:33.228327    5250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50307 /etc/kubernetes/controller-manager.conf
	I0910 11:08:33.230902    5250 kubeadm.go:163] "https://control-plane.minikube.internal:50307" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50307 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0910 11:08:33.230928    5250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 11:08:33.233824    5250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50307 /etc/kubernetes/scheduler.conf
	I0910 11:08:33.236604    5250 kubeadm.go:163] "https://control-plane.minikube.internal:50307" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50307 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0910 11:08:33.236627    5250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 11:08:33.239141    5250 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 11:08:33.242263    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 11:08:33.264427    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 11:08:33.621632    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 11:08:33.828085    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 11:08:33.849412    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
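	Because existing configuration files were found, the restart path re-runs individual kubeadm init phases against the regenerated config instead of doing a full init: certs, kubeconfig, kubelet-start, control-plane, then local etcd. As one loop, with minikube's pinned binaries on PATH exactly as in the commands above:
	
	#!/usr/bin/env bash
	# The init-phase sequence driven above, in order.
	set -euo pipefail
	CFG=/var/tmp/minikube/kubeadm.yaml
	BIN=/var/lib/minikube/binaries/v1.24.1
	
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  # word-splitting of $phase is intentional: "certs all" -> two args
	  sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"
	done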
	I0910 11:08:33.871780    5250 api_server.go:52] waiting for apiserver process to appear ...
	I0910 11:08:33.871864    5250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 11:08:34.373927    5250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 11:08:34.874250    5250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 11:08:35.373886    5250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 11:08:35.378394    5250 api_server.go:72] duration metric: took 1.506656291s to wait for apiserver process to appear ...
	I0910 11:08:35.378403    5250 api_server.go:88] waiting for apiserver healthz status ...
	I0910 11:08:35.378416    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:08:40.380490    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:08:40.380571    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:08:45.381370    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:08:45.381448    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:08:50.382228    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:08:50.382255    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:08:55.383079    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:08:55.383166    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:09:00.384650    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:09:00.384735    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:09:05.386473    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:09:05.386513    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:09:10.388413    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:09:10.388441    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:09:15.388729    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:09:15.388809    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:09:20.391261    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:09:20.391284    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:09:25.392407    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:09:25.392496    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:09:30.393327    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:09:30.393408    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:09:35.395357    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
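	
	Each `Checking apiserver healthz ...` / `stopped: ... context deadline exceeded` pair above is one HTTP GET against `/healthz` with a roughly five-second client timeout, so with the apiserver unreachable every attempt costs about 5 s and the timeout itself paces the retries. A minimal Go sketch of that probe pattern follows; the real check in api_server.go verifies against the cluster CA, so skipping TLS verification here is an assumption made only to keep the example self-contained:
	
	```go
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// waitForHealthz GETs the apiserver /healthz endpoint with a 5s client
	// timeout, retrying until it answers 200 OK or attempts run out. This
	// mirrors the probe loop visible in the log, not minikube's exact code.
	func waitForHealthz(url string, attempts int) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption for the sketch; the real client trusts the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for i := 0; i < attempts; i++ {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}
	
	func main() {
		// Address and port copied from the log above.
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 12); err != nil {
			fmt.Println(err)
		}
	}
	```
	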
	I0910 11:09:35.395779    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:09:35.435946    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:09:35.436088    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:09:35.456763    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:09:35.456866    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:09:35.472206    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:09:35.472280    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:09:35.484264    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:09:35.484349    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:09:35.495602    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:09:35.495682    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:09:35.505657    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:09:35.505727    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:09:35.516436    5250 logs.go:276] 0 containers: []
	W0910 11:09:35.516450    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:09:35.516517    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:09:35.527135    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:09:35.527155    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:09:35.527161    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:09:35.568619    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:09:35.568631    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:09:35.582991    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:09:35.583001    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:09:35.594458    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:09:35.594471    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:09:35.610365    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:09:35.610375    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:09:35.622352    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:09:35.622371    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:09:35.637501    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:09:35.637515    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:09:35.649428    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:09:35.649442    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:09:35.653658    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:09:35.653668    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:09:35.726440    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:09:35.726452    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:09:35.740541    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:09:35.740552    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:09:35.752190    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:09:35.752203    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:09:35.764690    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:09:35.764702    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:09:35.778927    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:09:35.778940    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:09:35.796340    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:09:35.796351    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:09:35.823233    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:09:35.823243    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:09:35.838225    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:09:35.838237    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
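	
	After each failed healthz attempt, the diagnostic cycle above enumerates the control-plane containers by the `k8s_<component>` name prefix (cri-dockerd's container naming) and tails the last 400 lines of each. A Go sketch of that docker-side loop, mirroring the `docker ps -a --filter=name=... --format={{.ID}}` and `docker logs --tail 400` commands in the log rather than minikube's actual logs.go implementation:
	
	```go
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// gather lists containers matching k8s_<component> and tails each one's
	// logs, reproducing the gathering commands visible in the log output.
	func gather(component string) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		for _, id := range strings.Fields(string(out)) {
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s\n", component, id, logs)
		}
	}
	
	func main() {
		// Component list taken from the filters used in the log above.
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "storage-provisioner"} {
			gather(c)
		}
	}
	```
	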
	I0910 11:09:38.349874    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:09:43.352583    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:09:43.353023    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:09:43.393670    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:09:43.393806    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:09:43.415063    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:09:43.415195    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:09:43.432700    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:09:43.432777    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:09:43.445495    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:09:43.445570    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:09:43.456122    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:09:43.456189    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:09:43.466815    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:09:43.466886    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:09:43.476685    5250 logs.go:276] 0 containers: []
	W0910 11:09:43.476699    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:09:43.476754    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:09:43.487371    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:09:43.487388    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:09:43.487393    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:09:43.501448    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:09:43.501459    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:09:43.515486    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:09:43.515497    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:09:43.531835    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:09:43.531848    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:09:43.574986    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:09:43.574997    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:09:43.588174    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:09:43.588187    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:09:43.592892    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:09:43.592897    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:09:43.607340    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:09:43.607351    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:09:43.622249    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:09:43.622262    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:09:43.639975    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:09:43.639986    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:09:43.651004    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:09:43.651017    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:09:43.661762    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:09:43.661773    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:09:43.680244    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:09:43.680258    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:09:43.718441    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:09:43.718455    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:09:43.733219    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:09:43.733232    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:09:43.745088    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:09:43.745101    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:09:43.772496    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:09:43.772508    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:09:46.288420    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:09:51.291098    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:09:51.291556    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:09:51.329641    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:09:51.329785    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:09:51.351351    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:09:51.351458    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:09:51.367009    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:09:51.367091    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:09:51.386100    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:09:51.386182    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:09:51.396695    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:09:51.396763    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:09:51.407161    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:09:51.407233    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:09:51.426042    5250 logs.go:276] 0 containers: []
	W0910 11:09:51.426055    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:09:51.426121    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:09:51.436392    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:09:51.436410    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:09:51.436416    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:09:51.450804    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:09:51.450817    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:09:51.465707    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:09:51.465718    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:09:51.507329    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:09:51.507341    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:09:51.542358    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:09:51.542370    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:09:51.556487    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:09:51.556499    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:09:51.573737    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:09:51.573746    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:09:51.586153    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:09:51.586165    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:09:51.600237    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:09:51.600249    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:09:51.615046    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:09:51.615059    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:09:51.626599    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:09:51.626610    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:09:51.643964    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:09:51.643978    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:09:51.669198    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:09:51.669208    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:09:51.680497    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:09:51.680510    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:09:51.685020    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:09:51.685028    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:09:51.698441    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:09:51.698451    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:09:51.713919    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:09:51.713930    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:09:54.229062    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:09:59.231384    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:09:59.231722    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:09:59.263946    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:09:59.264082    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:09:59.284090    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:09:59.284188    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:09:59.298566    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:09:59.298645    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:09:59.310420    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:09:59.310504    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:09:59.321102    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:09:59.321166    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:09:59.332196    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:09:59.332283    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:09:59.342672    5250 logs.go:276] 0 containers: []
	W0910 11:09:59.342684    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:09:59.342742    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:09:59.352904    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:09:59.352949    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:09:59.352954    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:09:59.367204    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:09:59.367213    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:09:59.381028    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:09:59.381038    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:09:59.402799    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:09:59.402812    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:09:59.414376    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:09:59.414385    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:09:59.425843    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:09:59.425854    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:09:59.439965    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:09:59.439975    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:09:59.466084    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:09:59.466093    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:09:59.505023    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:09:59.505030    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:09:59.540081    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:09:59.540093    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:09:59.552550    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:09:59.552561    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:09:59.563946    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:09:59.563965    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:09:59.581241    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:09:59.581255    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:09:59.592541    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:09:59.592554    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:09:59.599252    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:09:59.599260    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:09:59.611060    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:09:59.611075    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:09:59.622418    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:09:59.622433    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:10:02.138312    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:10:07.140572    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:10:07.140751    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:10:07.155650    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:10:07.155838    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:10:07.168717    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:10:07.168789    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:10:07.182846    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:10:07.182927    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:10:07.196126    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:10:07.196198    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:10:07.206885    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:10:07.206950    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:10:07.217313    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:10:07.217377    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:10:07.227610    5250 logs.go:276] 0 containers: []
	W0910 11:10:07.227657    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:10:07.227718    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:10:07.237950    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:10:07.237968    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:10:07.237974    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:10:07.242043    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:10:07.242049    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:10:07.257775    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:10:07.257793    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:10:07.269001    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:10:07.269011    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:10:07.280907    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:10:07.280918    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:10:07.305348    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:10:07.305354    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:10:07.319486    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:10:07.319503    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:10:07.330676    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:10:07.330685    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:10:07.370603    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:10:07.370609    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:10:07.404760    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:10:07.404776    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:10:07.422082    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:10:07.422093    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:10:07.437049    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:10:07.437060    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:10:07.448610    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:10:07.448621    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:10:07.466821    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:10:07.466832    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:10:07.479524    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:10:07.479539    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:10:07.492159    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:10:07.492169    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:10:07.506188    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:10:07.506201    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:10:10.020228    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:10:15.022878    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:10:15.023267    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:10:15.060376    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:10:15.060511    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:10:15.081697    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:10:15.081803    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:10:15.107278    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:10:15.107355    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:10:15.118879    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:10:15.118959    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:10:15.129449    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:10:15.129517    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:10:15.139939    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:10:15.140017    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:10:15.150340    5250 logs.go:276] 0 containers: []
	W0910 11:10:15.150351    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:10:15.150406    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:10:15.160625    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:10:15.160643    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:10:15.160648    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:10:15.175394    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:10:15.175408    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:10:15.187000    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:10:15.187009    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:10:15.198514    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:10:15.198526    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:10:15.210274    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:10:15.210285    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:10:15.222252    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:10:15.222266    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:10:15.226661    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:10:15.226669    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:10:15.239342    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:10:15.239353    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:10:15.253638    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:10:15.253650    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:10:15.265205    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:10:15.265215    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:10:15.304446    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:10:15.304453    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:10:15.338381    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:10:15.338393    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:10:15.352552    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:10:15.352566    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:10:15.369854    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:10:15.369865    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:10:15.387412    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:10:15.387422    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:10:15.402137    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:10:15.402149    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:10:15.413173    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:10:15.413186    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:10:17.939331    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:10:22.941837    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:10:22.942307    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:10:22.984342    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:10:22.984460    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:10:23.004708    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:10:23.004849    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:10:23.019560    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:10:23.019630    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:10:23.033556    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:10:23.033619    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:10:23.047945    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:10:23.048021    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:10:23.058592    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:10:23.058658    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:10:23.068142    5250 logs.go:276] 0 containers: []
	W0910 11:10:23.068151    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:10:23.068199    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:10:23.078712    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:10:23.078728    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:10:23.078734    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:10:23.113606    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:10:23.113617    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:10:23.127846    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:10:23.127857    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:10:23.138618    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:10:23.138629    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:10:23.158171    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:10:23.158184    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:10:23.169745    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:10:23.169755    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:10:23.184015    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:10:23.184029    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:10:23.195965    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:10:23.195978    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:10:23.207216    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:10:23.207226    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:10:23.225016    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:10:23.225028    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:10:23.264493    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:10:23.264502    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:10:23.268814    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:10:23.268820    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:10:23.282977    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:10:23.282987    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:10:23.295438    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:10:23.295448    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:10:23.309933    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:10:23.309944    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:10:23.322037    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:10:23.322051    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:10:23.338929    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:10:23.338940    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:10:25.864786    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:10:30.867167    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:10:30.867644    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:10:30.906569    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:10:30.906697    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:10:30.928101    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:10:30.928194    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:10:30.942663    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:10:30.942741    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:10:30.954796    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:10:30.954871    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:10:30.965698    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:10:30.965770    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:10:30.976331    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:10:30.976402    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:10:30.987271    5250 logs.go:276] 0 containers: []
	W0910 11:10:30.987282    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:10:30.987344    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:10:30.997450    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:10:30.997467    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:10:30.997474    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:10:31.023171    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:10:31.023178    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:10:31.058041    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:10:31.058053    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:10:31.071753    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:10:31.071763    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:10:31.084440    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:10:31.084454    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:10:31.098697    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:10:31.098710    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:10:31.110876    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:10:31.110890    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:10:31.129021    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:10:31.129031    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:10:31.140614    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:10:31.140626    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:10:31.158241    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:10:31.158251    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:10:31.171309    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:10:31.171322    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:10:31.176101    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:10:31.176109    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:10:31.200737    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:10:31.200750    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:10:31.214134    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:10:31.214143    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:10:31.229547    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:10:31.229558    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:10:31.242043    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:10:31.242058    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:10:31.282773    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:10:31.282784    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:10:33.796129    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:10:38.798329    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:10:38.798462    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:10:38.815582    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:10:38.815657    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:10:38.827492    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:10:38.827570    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:10:38.838111    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:10:38.838176    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:10:38.848876    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:10:38.848947    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:10:38.860338    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:10:38.860410    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:10:38.873630    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:10:38.873709    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:10:38.884656    5250 logs.go:276] 0 containers: []
	W0910 11:10:38.884669    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:10:38.884729    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:10:38.895660    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:10:38.895681    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:10:38.895687    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:10:38.911748    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:10:38.911762    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:10:38.932579    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:10:38.932591    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:10:38.956564    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:10:38.956578    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:10:38.973093    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:10:38.973106    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:10:39.012532    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:10:39.012545    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:10:39.026851    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:10:39.026866    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:10:39.044324    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:10:39.044335    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:10:39.063024    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:10:39.063036    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:10:39.090697    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:10:39.090716    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:10:39.134880    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:10:39.134892    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:10:39.147930    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:10:39.147944    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:10:39.167561    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:10:39.167573    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:10:39.179417    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:10:39.179430    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:10:39.184150    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:10:39.184156    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:10:39.198518    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:10:39.198530    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:10:39.211998    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:10:39.212010    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:10:41.727849    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:10:46.730042    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:10:46.730239    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:10:46.752139    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:10:46.752247    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:10:46.767622    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:10:46.767698    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:10:46.779595    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:10:46.779662    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:10:46.790128    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:10:46.790197    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:10:46.800440    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:10:46.800502    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:10:46.811016    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:10:46.811076    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:10:46.821358    5250 logs.go:276] 0 containers: []
	W0910 11:10:46.821366    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:10:46.821443    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:10:46.832233    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:10:46.832257    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:10:46.832263    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:10:46.846644    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:10:46.846654    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:10:46.858387    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:10:46.858401    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:10:46.870003    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:10:46.870014    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:10:46.882819    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:10:46.882830    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:10:46.897478    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:10:46.897488    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:10:46.911942    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:10:46.911956    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:10:46.936507    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:10:46.936517    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:10:46.949279    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:10:46.949290    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:10:46.953531    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:10:46.953540    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:10:46.964741    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:10:46.964752    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:10:46.978931    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:10:46.978940    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:10:47.018967    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:10:47.018977    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:10:47.055311    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:10:47.055323    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:10:47.069709    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:10:47.069722    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:10:47.083770    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:10:47.083786    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:10:47.105032    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:10:47.105046    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
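
The block above is one full pass of the wait loop that repeats for the rest of this log: a probe of https://10.0.2.15:8443/healthz that gives up after roughly 5 s, followed by a sweep that enumerates the k8s_* containers and tails their logs before the next probe about 2.5 s later. A minimal shell sketch of the probe half, assuming curl is available on the host; the URL and the ~5 s budget are taken from the log, while the retry count and sleep interval are illustrative:

    #!/bin/bash
    # Probe the guest apiserver the way the log does: GET /healthz with a
    # 5s budget, retried until it answers "ok" or the attempts run out.
    # -k skips TLS verification (the guest's cert is not trusted here).
    HEALTHZ_URL="https://10.0.2.15:8443/healthz"

    for attempt in $(seq 1 10); do
      if curl -sk --max-time 5 "$HEALTHZ_URL" | grep -q ok; then
        echo "apiserver healthy after ${attempt} attempt(s)"
        exit 0
      fi
      echo "attempt ${attempt}: healthz not ready, retrying"
      sleep 2.5
    done

    echo "apiserver never became healthy" >&2
    exit 1
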
	I0910 11:10:49.619694    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:10:54.621770    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:10:54.621888    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:10:54.637817    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:10:54.637893    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:10:54.653240    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:10:54.653317    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:10:54.664237    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:10:54.664316    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:10:54.675529    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:10:54.675603    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:10:54.686425    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:10:54.686498    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:10:54.697215    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:10:54.697285    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:10:54.707886    5250 logs.go:276] 0 containers: []
	W0910 11:10:54.707898    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:10:54.707955    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:10:54.724747    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:10:54.724763    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:10:54.724769    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:10:54.736581    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:10:54.736592    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:10:54.740911    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:10:54.740918    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:10:54.755074    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:10:54.755084    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:10:54.770205    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:10:54.770214    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:10:54.781888    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:10:54.781901    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:10:54.808085    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:10:54.808094    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:10:54.820105    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:10:54.820116    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:10:54.861735    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:10:54.861747    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:10:54.876355    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:10:54.876365    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:10:54.887968    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:10:54.887983    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:10:54.905955    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:10:54.905965    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:10:54.941681    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:10:54.941691    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:10:54.956939    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:10:54.956952    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:10:54.972448    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:10:54.972461    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:10:54.994436    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:10:54.994450    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:10:55.006812    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:10:55.006826    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:10:57.522786    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:11:02.524957    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:11:02.525156    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:11:02.549742    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:11:02.549822    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:11:02.562978    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:11:02.563069    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:11:02.573857    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:11:02.573931    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:11:02.584608    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:11:02.584680    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:11:02.598274    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:11:02.598363    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:11:02.609921    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:11:02.609999    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:11:02.620452    5250 logs.go:276] 0 containers: []
	W0910 11:11:02.620467    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:11:02.620532    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:11:02.631150    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:11:02.631169    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:11:02.631175    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:11:02.646032    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:11:02.646045    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:11:02.687700    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:11:02.687712    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:11:02.700102    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:11:02.700116    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:11:02.713231    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:11:02.713243    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:11:02.724425    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:11:02.724435    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:11:02.749273    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:11:02.749282    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:11:02.792866    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:11:02.792889    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:11:02.807508    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:11:02.807521    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:11:02.819975    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:11:02.819987    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:11:02.831742    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:11:02.831758    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:11:02.844688    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:11:02.844700    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:11:02.849521    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:11:02.849535    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:11:02.862283    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:11:02.862297    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:11:02.877024    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:11:02.877034    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:11:02.894759    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:11:02.894777    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:11:02.909886    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:11:02.909907    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:11:05.427233    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:11:10.429685    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:11:10.429970    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:11:10.462608    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:11:10.462732    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:11:10.480169    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:11:10.480260    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:11:10.493331    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:11:10.493404    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:11:10.505857    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:11:10.505934    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:11:10.519757    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:11:10.519824    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:11:10.530138    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:11:10.530210    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:11:10.539979    5250 logs.go:276] 0 containers: []
	W0910 11:11:10.539991    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:11:10.540049    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:11:10.550278    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:11:10.550296    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:11:10.550301    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:11:10.585047    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:11:10.585060    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:11:10.601717    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:11:10.601732    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:11:10.615860    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:11:10.615870    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:11:10.632823    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:11:10.632834    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:11:10.644154    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:11:10.644167    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:11:10.648570    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:11:10.648577    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:11:10.662993    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:11:10.663002    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:11:10.683230    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:11:10.683248    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:11:10.695823    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:11:10.695835    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:11:10.707384    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:11:10.707397    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:11:10.719544    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:11:10.719558    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:11:10.759800    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:11:10.759826    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:11:10.774639    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:11:10.774652    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:11:10.785758    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:11:10.785770    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:11:10.799796    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:11:10.799808    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:11:10.811025    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:11:10.811036    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
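
Each failed probe triggers the same collection sweep seen above: `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` to find current and exited container IDs for each control-plane component, then `docker logs --tail 400` on every hit. A compact sketch of that sweep, assuming direct docker access on the node; the component list and docker flags are copied from the log, and the section headers are illustrative:

    #!/bin/bash
    # Reproduce the log-gathering sweep: for each component, list every
    # matching container (running or exited) and tail its last 400 lines.
    COMPONENTS="kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner"

    for comp in $COMPONENTS; do
      ids=$(docker ps -a --filter "name=k8s_${comp}" --format '{{.ID}}')
      if [ -z "$ids" ]; then
        echo "No container was found matching \"${comp}\"" >&2
        continue
      fi
      for id in $ids; do
        echo "=== ${comp} [${id}] ==="
        docker logs --tail 400 "$id"
      done
    done
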
	I0910 11:11:13.338355    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:11:18.339888    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:11:18.340346    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:11:18.382113    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:11:18.382250    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:11:18.403910    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:11:18.404002    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:11:18.418916    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:11:18.418996    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:11:18.431671    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:11:18.431748    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:11:18.442983    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:11:18.443043    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:11:18.453798    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:11:18.453864    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:11:18.463901    5250 logs.go:276] 0 containers: []
	W0910 11:11:18.463911    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:11:18.463971    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:11:18.474455    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:11:18.474472    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:11:18.474477    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:11:18.486473    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:11:18.486483    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:11:18.497962    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:11:18.497972    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:11:18.538607    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:11:18.538618    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:11:18.572761    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:11:18.572775    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:11:18.586550    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:11:18.586561    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:11:18.603798    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:11:18.603812    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:11:18.615501    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:11:18.615516    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:11:18.629307    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:11:18.629320    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:11:18.644751    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:11:18.644767    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:11:18.664579    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:11:18.664592    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:11:18.676029    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:11:18.676040    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:11:18.680138    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:11:18.680144    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:11:18.695027    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:11:18.695039    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:11:18.707126    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:11:18.707134    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:11:18.721190    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:11:18.721203    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:11:18.732951    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:11:18.732962    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:11:21.259477    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:11:26.262129    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:11:26.262293    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:11:26.275211    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:11:26.275278    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:11:26.288496    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:11:26.288565    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:11:26.302285    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:11:26.302354    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:11:26.313774    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:11:26.313842    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:11:26.328608    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:11:26.328677    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:11:26.340000    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:11:26.340074    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:11:26.350628    5250 logs.go:276] 0 containers: []
	W0910 11:11:26.350639    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:11:26.350697    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:11:26.361651    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:11:26.361669    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:11:26.361675    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:11:26.404901    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:11:26.404908    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:11:26.419964    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:11:26.419975    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:11:26.432020    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:11:26.432035    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:11:26.448501    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:11:26.448512    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:11:26.462451    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:11:26.462463    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:11:26.467400    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:11:26.467407    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:11:26.481303    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:11:26.481316    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:11:26.495607    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:11:26.495616    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:11:26.513448    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:11:26.513460    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:11:26.538953    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:11:26.538961    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:11:26.551721    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:11:26.551729    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:11:26.563095    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:11:26.563105    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:11:26.577623    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:11:26.577634    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:11:26.589291    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:11:26.589302    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:11:26.626090    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:11:26.626101    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:11:26.640049    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:11:26.640062    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:11:29.154380    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:11:34.156567    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:11:34.156809    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:11:34.178439    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:11:34.178531    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:11:34.192015    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:11:34.192099    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:11:34.203509    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:11:34.203580    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:11:34.214189    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:11:34.214274    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:11:34.232618    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:11:34.232689    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:11:34.243000    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:11:34.243070    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:11:34.252740    5250 logs.go:276] 0 containers: []
	W0910 11:11:34.252751    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:11:34.252810    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:11:34.262914    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:11:34.262933    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:11:34.262939    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:11:34.281974    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:11:34.281988    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:11:34.293597    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:11:34.293611    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:11:34.307434    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:11:34.307447    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:11:34.322214    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:11:34.322228    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:11:34.333268    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:11:34.333282    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:11:34.344339    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:11:34.344349    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:11:34.362013    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:11:34.362026    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:11:34.404561    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:11:34.404573    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:11:34.409027    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:11:34.409033    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:11:34.447002    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:11:34.447016    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:11:34.459149    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:11:34.459161    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:11:34.472713    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:11:34.472727    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:11:34.484217    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:11:34.484231    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:11:34.495613    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:11:34.495625    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:11:34.518629    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:11:34.518638    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:11:34.532616    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:11:34.532627    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:11:37.047090    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:11:42.049150    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:11:42.049248    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:11:42.060901    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:11:42.060978    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:11:42.071771    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:11:42.071851    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:11:42.082424    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:11:42.082499    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:11:42.092741    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:11:42.092812    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:11:42.103443    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:11:42.103511    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:11:42.114087    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:11:42.114165    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:11:42.124990    5250 logs.go:276] 0 containers: []
	W0910 11:11:42.124999    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:11:42.125052    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:11:42.135459    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:11:42.135477    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:11:42.135483    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:11:42.150008    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:11:42.150018    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:11:42.167469    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:11:42.167484    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:11:42.181478    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:11:42.181489    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:11:42.197796    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:11:42.197807    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:11:42.209894    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:11:42.209910    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:11:42.252275    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:11:42.252286    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:11:42.256386    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:11:42.256395    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:11:42.291467    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:11:42.291478    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:11:42.306187    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:11:42.306198    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:11:42.320935    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:11:42.320946    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:11:42.336217    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:11:42.336228    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:11:42.353991    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:11:42.354002    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:11:42.365843    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:11:42.365854    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:11:42.377171    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:11:42.377182    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:11:42.389752    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:11:42.389764    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:11:42.401357    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:11:42.401371    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:11:44.925838    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:11:49.927949    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0910 11:11:49.928043    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:11:49.952663    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:11:49.952738    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:11:49.964326    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:11:49.964403    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:11:49.975370    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:11:49.975441    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:11:49.987893    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:11:49.987980    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:11:49.999631    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:11:49.999707    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:11:50.010718    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:11:50.010789    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:11:50.022193    5250 logs.go:276] 0 containers: []
	W0910 11:11:50.022204    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:11:50.022267    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:11:50.033856    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:11:50.033875    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:11:50.033881    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:11:50.049335    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:11:50.049346    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:11:50.088323    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:11:50.088335    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:11:50.102721    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:11:50.102732    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:11:50.120616    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:11:50.120627    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:11:50.132616    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:11:50.132628    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:11:50.157911    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:11:50.157924    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:11:50.162584    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:11:50.162591    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:11:50.177674    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:11:50.177685    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:11:50.194873    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:11:50.194884    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:11:50.206529    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:11:50.206543    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:11:50.249008    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:11:50.249025    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:11:50.263772    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:11:50.263783    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:11:50.275470    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:11:50.275482    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:11:50.287763    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:11:50.287776    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:11:50.302041    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:11:50.302052    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:11:50.314486    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:11:50.314498    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
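
The recurring "container status" command in these sweeps is a two-level fallback: prefer crictl if it resolves on the PATH, otherwise fall back to `docker ps -a`. Spelled out, the one-liner from the log behaves roughly like this sketch (assumptions: sudo access on the node; the command strings themselves are copied from the log):

    #!/bin/bash
    # Expanded form of:
    #   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    # 1. `which crictl` prints the crictl path if installed; otherwise the
    #    backtick expansion falls back to the bare word "crictl".
    # 2. If that command is missing or fails, `||` runs docker ps -a instead.
    if command -v crictl >/dev/null 2>&1; then
      sudo "$(command -v crictl)" ps -a
    else
      sudo docker ps -a
    fi
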
	I0910 11:11:52.827047    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:11:57.827432    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:11:57.827682    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:11:57.863836    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:11:57.863923    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:11:57.885853    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:11:57.885935    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:11:57.903243    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:11:57.903317    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:11:57.919009    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:11:57.919077    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:11:57.930221    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:11:57.930291    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:11:57.941008    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:11:57.941078    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:11:57.951315    5250 logs.go:276] 0 containers: []
	W0910 11:11:57.951327    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:11:57.951381    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:11:57.962096    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:11:57.962115    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:11:57.962122    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:11:57.972986    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:11:57.972997    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:11:57.995621    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:11:57.995630    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:11:58.014034    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:11:58.014048    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:11:58.027775    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:11:58.027786    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:11:58.043326    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:11:58.043338    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:11:58.060662    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:11:58.060675    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:11:58.071749    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:11:58.071759    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:11:58.085392    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:11:58.085402    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:11:58.106674    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:11:58.106685    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:11:58.121612    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:11:58.121624    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:11:58.133276    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:11:58.133290    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:11:58.148117    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:11:58.148128    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:11:58.161983    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:11:58.161997    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:11:58.204592    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:11:58.204603    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:11:58.208865    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:11:58.208872    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:11:58.243291    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:11:58.243301    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:12:00.756850    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:05.758921    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:05.759023    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:12:05.770802    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:12:05.770870    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:12:05.781493    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:12:05.781564    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:12:05.792609    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:12:05.792686    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:12:05.805467    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:12:05.805539    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:12:05.815749    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:12:05.815821    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:12:05.826539    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:12:05.826609    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:12:05.837720    5250 logs.go:276] 0 containers: []
	W0910 11:12:05.837731    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:12:05.837790    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:12:05.848002    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:12:05.848019    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:12:05.848025    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:12:05.860662    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:12:05.860674    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:12:05.873413    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:12:05.873423    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:12:05.917349    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:12:05.917364    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:12:05.930306    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:12:05.930317    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:12:05.942509    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:12:05.942529    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:12:05.953868    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:12:05.953877    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:12:05.976470    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:12:05.976480    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:12:05.994675    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:12:05.994689    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:12:06.009281    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:12:06.009294    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:12:06.023552    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:12:06.023563    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:12:06.038440    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:12:06.038457    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:12:06.059842    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:12:06.059858    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:12:06.071682    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:12:06.071692    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:12:06.076014    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:12:06.076021    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:12:06.113295    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:12:06.113308    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:12:06.128275    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:12:06.128286    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:12:08.642384    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:13.642642    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:13.642863    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:12:13.663990    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:12:13.664098    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:12:13.678811    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:12:13.678889    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:12:13.690732    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:12:13.690808    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:12:13.701309    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:12:13.701379    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:12:13.711452    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:12:13.711525    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:12:13.722069    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:12:13.722135    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:12:13.732396    5250 logs.go:276] 0 containers: []
	W0910 11:12:13.732408    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:12:13.732471    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:12:13.745076    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
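
The `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` runs above are how each component's containers are enumerated before gathering (two IDs appear where a container has been restarted and its exited predecessor is still present). A local approximation of that discovery step:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers, running or exited, whose name matches
// k8s_<component>. This is the same filter the log lines above use.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format={{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors the logs.go:276 lines
	}
}
```
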
	I0910 11:12:13.745093    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:12:13.745099    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:12:13.756697    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:12:13.756710    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:12:13.768042    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:12:13.768054    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:12:13.779245    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:12:13.779255    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:12:13.821774    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:12:13.821783    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:12:13.849517    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:12:13.849531    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:12:13.864580    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:12:13.864590    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:12:13.883241    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:12:13.883255    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:12:13.925295    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:12:13.925308    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:12:13.940183    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:12:13.940194    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:12:13.955386    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:12:13.955400    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:12:13.966971    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:12:13.966984    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:12:13.979959    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:12:13.979970    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:12:14.003155    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:12:14.003164    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:12:14.007724    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:12:14.007731    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:12:14.020711    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:12:14.020724    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:12:14.034685    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:12:14.034696    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:12:16.548194    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:21.550730    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:21.550904    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:12:21.569129    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:12:21.569226    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:12:21.582862    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:12:21.582937    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:12:21.594496    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:12:21.594563    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:12:21.607685    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:12:21.607755    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:12:21.617997    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:12:21.618070    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:12:21.628618    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:12:21.628684    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:12:21.638882    5250 logs.go:276] 0 containers: []
	W0910 11:12:21.638897    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:12:21.638954    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:12:21.649904    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:12:21.649923    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:12:21.649929    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:12:21.662443    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:12:21.662455    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:12:21.674020    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:12:21.674031    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:12:21.695865    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:12:21.695879    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:12:21.713961    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:12:21.713972    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:12:21.725978    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:12:21.725991    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:12:21.741279    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:12:21.741290    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:12:21.754928    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:12:21.754939    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:12:21.772462    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:12:21.772472    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:12:21.784582    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:12:21.784595    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:12:21.806894    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:12:21.806901    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:12:21.848389    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:12:21.848403    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:12:21.852781    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:12:21.852790    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:12:21.887660    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:12:21.887674    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:12:21.902631    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:12:21.902641    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:12:21.922176    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:12:21.922190    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:12:21.937101    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:12:21.937111    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:12:24.451089    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:29.453275    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:29.453475    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:12:29.481464    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:12:29.481580    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:12:29.499637    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:12:29.499725    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:12:29.513031    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:12:29.513101    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:12:29.524858    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:12:29.524931    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:12:29.535549    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:12:29.535619    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:12:29.546188    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:12:29.546265    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:12:29.556425    5250 logs.go:276] 0 containers: []
	W0910 11:12:29.556451    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:12:29.556509    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:12:29.567640    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:12:29.567664    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:12:29.567670    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:12:29.581717    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:12:29.581728    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:12:29.593329    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:12:29.593339    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:12:29.606890    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:12:29.606904    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:12:29.620982    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:12:29.620993    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:12:29.633648    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:12:29.633660    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:12:29.671471    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:12:29.671481    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:12:29.695483    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:12:29.695489    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:12:29.707337    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:12:29.707348    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:12:29.725877    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:12:29.725890    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:12:29.738394    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:12:29.738408    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:12:29.779103    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:12:29.779113    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:12:29.790280    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:12:29.790293    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:12:29.804984    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:12:29.804995    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:12:29.820820    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:12:29.820832    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:12:29.825038    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:12:29.825044    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:12:29.859507    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:12:29.859518    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:12:32.374177    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:37.374349    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:37.374459    5250 kubeadm.go:597] duration metric: took 4m4.290381791s to restartPrimaryControlPlane
	W0910 11:12:37.374510    5250 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0910 11:12:37.374529    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0910 11:12:38.374760    5250 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.000242333s)
	I0910 11:12:38.374829    5250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 11:12:38.379969    5250 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 11:12:38.382850    5250 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 11:12:38.386003    5250 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 11:12:38.386012    5250 kubeadm.go:157] found existing configuration files:
	
	I0910 11:12:38.386037    5250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50307 /etc/kubernetes/admin.conf
	I0910 11:12:38.388771    5250 kubeadm.go:163] "https://control-plane.minikube.internal:50307" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50307 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 11:12:38.388798    5250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 11:12:38.391292    5250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50307 /etc/kubernetes/kubelet.conf
	I0910 11:12:38.394391    5250 kubeadm.go:163] "https://control-plane.minikube.internal:50307" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50307 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 11:12:38.394417    5250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 11:12:38.397577    5250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50307 /etc/kubernetes/controller-manager.conf
	I0910 11:12:38.400036    5250 kubeadm.go:163] "https://control-plane.minikube.internal:50307" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50307 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 11:12:38.400056    5250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 11:12:38.402992    5250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50307 /etc/kubernetes/scheduler.conf
	I0910 11:12:38.405961    5250 kubeadm.go:163] "https://control-plane.minikube.internal:50307" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50307 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 11:12:38.405984    5250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
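
The grep/rm sequence above is the stale-kubeconfig check: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if the endpoint is absent (or, as here, if the file does not exist at all after `kubeadm reset`), so that the following `kubeadm init` writes fresh ones. A local sketch of the same check; the endpoint string is taken from the log, and file access is direct rather than over SSH with sudo as in the real run.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50307"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		// Missing file or wrong endpoint both count as stale: remove it so
		// kubeadm init regenerates it (rm -f semantics: ignore errors).
		if err != nil || !strings.Contains(string(data), endpoint) {
			_ = os.Remove(conf)
			fmt.Println("removed stale", conf)
		}
	}
}
```
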
	I0910 11:12:38.408569    5250 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 11:12:38.426481    5250 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0910 11:12:38.426571    5250 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 11:12:38.474407    5250 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 11:12:38.474457    5250 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 11:12:38.474504    5250 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0910 11:12:38.524920    5250 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 11:12:38.529981    5250 out.go:235]   - Generating certificates and keys ...
	I0910 11:12:38.530016    5250 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 11:12:38.530051    5250 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 11:12:38.530097    5250 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 11:12:38.530129    5250 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 11:12:38.530164    5250 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 11:12:38.530189    5250 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 11:12:38.530220    5250 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 11:12:38.530265    5250 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 11:12:38.530298    5250 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 11:12:38.530339    5250 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 11:12:38.530365    5250 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 11:12:38.530391    5250 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 11:12:38.685735    5250 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 11:12:38.933938    5250 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 11:12:39.016659    5250 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 11:12:39.184718    5250 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 11:12:39.214325    5250 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 11:12:39.215544    5250 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 11:12:39.215570    5250 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 11:12:39.300912    5250 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 11:12:39.308384    5250 out.go:235]   - Booting up control plane ...
	I0910 11:12:39.308446    5250 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 11:12:39.308484    5250 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 11:12:39.308521    5250 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 11:12:39.308572    5250 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 11:12:39.308662    5250 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0910 11:12:43.806224    5250 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502755 seconds
	I0910 11:12:43.806374    5250 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 11:12:43.810927    5250 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 11:12:44.324816    5250 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 11:12:44.324947    5250 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-978000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 11:12:44.830426    5250 kubeadm.go:310] [bootstrap-token] Using token: 3orerm.xyjpdf2qf6njoeux
	I0910 11:12:44.836828    5250 out.go:235]   - Configuring RBAC rules ...
	I0910 11:12:44.836900    5250 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 11:12:44.836950    5250 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 11:12:44.841426    5250 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 11:12:44.842290    5250 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 11:12:44.843158    5250 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 11:12:44.843904    5250 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 11:12:44.847148    5250 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 11:12:45.022591    5250 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0910 11:12:45.234977    5250 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0910 11:12:45.235424    5250 kubeadm.go:310] 
	I0910 11:12:45.235458    5250 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0910 11:12:45.235463    5250 kubeadm.go:310] 
	I0910 11:12:45.235561    5250 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0910 11:12:45.235594    5250 kubeadm.go:310] 
	I0910 11:12:45.235622    5250 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0910 11:12:45.235659    5250 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 11:12:45.235694    5250 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 11:12:45.235697    5250 kubeadm.go:310] 
	I0910 11:12:45.235729    5250 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0910 11:12:45.235734    5250 kubeadm.go:310] 
	I0910 11:12:45.235758    5250 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 11:12:45.235763    5250 kubeadm.go:310] 
	I0910 11:12:45.235789    5250 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0910 11:12:45.235834    5250 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 11:12:45.235876    5250 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 11:12:45.235883    5250 kubeadm.go:310] 
	I0910 11:12:45.235923    5250 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 11:12:45.235986    5250 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0910 11:12:45.235990    5250 kubeadm.go:310] 
	I0910 11:12:45.236036    5250 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3orerm.xyjpdf2qf6njoeux \
	I0910 11:12:45.236256    5250 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fe03b769f4337d7c0adc05ef52c00fad5eef028fab37b5c6cf35794f6ca4bdd0 \
	I0910 11:12:45.236266    5250 kubeadm.go:310] 	--control-plane 
	I0910 11:12:45.236268    5250 kubeadm.go:310] 
	I0910 11:12:45.236303    5250 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0910 11:12:45.236305    5250 kubeadm.go:310] 
	I0910 11:12:45.236340    5250 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3orerm.xyjpdf2qf6njoeux \
	I0910 11:12:45.236398    5250 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fe03b769f4337d7c0adc05ef52c00fad5eef028fab37b5c6cf35794f6ca4bdd0 
	I0910 11:12:45.236446    5250 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 11:12:45.236478    5250 cni.go:84] Creating CNI manager for ""
	I0910 11:12:45.236488    5250 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:12:45.240678    5250 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 11:12:45.247605    5250 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 11:12:45.250578    5250 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
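
The bridge-CNI step above writes a 496-byte conflist into /etc/cni/net.d. The exact payload is not shown in the log; the sketch below writes a representative bridge-plus-portmap conflist of the kind CNI expects, for illustration only, and the subnet and plugin options are assumptions rather than minikube's generated values.

```go
package main

import (
	"fmt"
	"os"
)

// Illustrative bridge CNI conflist; not the literal file minikube generates.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Needs root, like the sudo mkdir -p / scp pair in the log.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote /etc/cni/net.d/1-k8s.conflist")
}
```
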
	I0910 11:12:45.255564    5250 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 11:12:45.255617    5250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 11:12:45.255658    5250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-978000 minikube.k8s.io/updated_at=2024_09_10T11_12_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=running-upgrade-978000 minikube.k8s.io/primary=true
	I0910 11:12:45.298345    5250 kubeadm.go:1113] duration metric: took 42.763583ms to wait for elevateKubeSystemPrivileges
	I0910 11:12:45.298367    5250 ops.go:34] apiserver oom_adj: -16
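
The "apiserver oom_adj: -16" line reports the value read by the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command a few lines earlier; a negative score biases the kernel OOM killer away from the apiserver. A small local sketch of the same lookup (the legacy oom_adj procfs file, as the log uses, rather than the newer oom_score_adj):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pids := strings.Fields(string(out))
	if len(pids) == 0 {
		return
	}
	// Read the first matching PID's legacy OOM adjustment value.
	score, err := exec.Command("cat", "/proc/"+pids[0]+"/oom_adj").Output()
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", score) // -16 in the run above
}
```
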
	I0910 11:12:45.298374    5250 kubeadm.go:394] duration metric: took 4m12.228942208s to StartCluster
	I0910 11:12:45.298385    5250 settings.go:142] acquiring lock: {Name:mkc4479acb7c6185024679cd35acf0055f682c3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:12:45.298478    5250 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:12:45.298864    5250 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/kubeconfig: {Name:mk1f6cc8b92900503b90f69186dd5a0cadd3a95f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:12:45.299071    5250 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:12:45.299102    5250 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 11:12:45.299143    5250 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-978000"
	I0910 11:12:45.299156    5250 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-978000"
	W0910 11:12:45.299160    5250 addons.go:243] addon storage-provisioner should already be in state true
	I0910 11:12:45.299171    5250 host.go:66] Checking if "running-upgrade-978000" exists ...
	I0910 11:12:45.299168    5250 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-978000"
	I0910 11:12:45.299239    5250 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-978000"
	I0910 11:12:45.299302    5250 config.go:182] Loaded profile config "running-upgrade-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0910 11:12:45.300648    5250 out.go:177] * Verifying Kubernetes components...
	I0910 11:12:45.301397    5250 kapi.go:59] client config for running-upgrade-978000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/running-upgrade-978000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/running-upgrade-978000/client.key", CAFile:"/Users/jenkins/minikube-integration/19598-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101ff2200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 11:12:45.306991    5250 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-978000"
	W0910 11:12:45.306996    5250 addons.go:243] addon default-storageclass should already be in state true
	I0910 11:12:45.307004    5250 host.go:66] Checking if "running-upgrade-978000" exists ...
	I0910 11:12:45.307518    5250 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 11:12:45.307523    5250 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 11:12:45.307529    5250 sshutil.go:53] new ssh client: &{IP:localhost Port:50275 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/running-upgrade-978000/id_rsa Username:docker}
	I0910 11:12:45.309549    5250 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 11:12:45.312662    5250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 11:12:45.316697    5250 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 11:12:45.316703    5250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 11:12:45.316710    5250 sshutil.go:53] new ssh client: &{IP:localhost Port:50275 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/running-upgrade-978000/id_rsa Username:docker}
	I0910 11:12:45.404248    5250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 11:12:45.409067    5250 api_server.go:52] waiting for apiserver process to appear ...
	I0910 11:12:45.409112    5250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 11:12:45.412969    5250 api_server.go:72] duration metric: took 113.889584ms to wait for apiserver process to appear ...
	I0910 11:12:45.412976    5250 api_server.go:88] waiting for apiserver healthz status ...
	I0910 11:12:45.412982    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:45.448854    5250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 11:12:45.459223    5250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
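
The two Run lines above apply the addon manifests with the version-pinned kubectl binary, passing the kubeconfig through the environment on the sudo command line. A local sketch of that invocation pattern; paths are the ones the log shows, and sudo accepts the VAR=value argument form used here.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.24.1/kubectl" // version-pinned binary, as in the log
	for _, manifest := range []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	} {
		// sudo KUBECONFIG=... kubectl apply -f <manifest>, same shape as the log line.
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			kubectl, "apply", "-f", manifest)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("apply %s failed: %v\n%s", manifest, err, out)
		}
	}
}
```
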
	I0910 11:12:45.781143    5250 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0910 11:12:45.781156    5250 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0910 11:12:50.414996    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:50.415030    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:55.415363    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:55.415378    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:00.415558    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:00.415596    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:05.415873    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:05.415894    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:10.416405    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:10.416455    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:15.417142    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:15.417168    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0910 11:13:15.782820    5250 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0910 11:13:15.787249    5250 out.go:177] * Enabled addons: storage-provisioner
	I0910 11:13:15.798065    5250 addons.go:510] duration metric: took 30.499767375s for enable addons: enabled=[storage-provisioner]
	I0910 11:13:20.418004    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:20.418043    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:25.419156    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:25.419221    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:30.420740    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:30.420782    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:35.422631    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:35.422655    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:40.424712    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:40.424757    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:45.426960    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:45.427049    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:13:45.440981    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:13:45.441056    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:13:45.452251    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:13:45.452331    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:13:45.466791    5250 logs.go:276] 2 containers: [7e18ed854af8 7fb3f2c0be6a]
	I0910 11:13:45.466871    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:13:45.478839    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:13:45.478909    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:13:45.489949    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:13:45.490026    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:13:45.500639    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:13:45.500710    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:13:45.510617    5250 logs.go:276] 0 containers: []
	W0910 11:13:45.510628    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:13:45.510690    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:13:45.521213    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:13:45.521228    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:13:45.521234    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:13:45.535067    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:13:45.535081    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:13:45.547617    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:13:45.547631    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:13:45.562434    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:13:45.562447    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:13:45.580845    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:13:45.580857    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:13:45.618825    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:13:45.618839    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:13:45.623444    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:13:45.623451    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:13:45.659303    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:13:45.659318    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:13:45.683354    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:13:45.683368    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:13:45.700573    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:13:45.700586    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:13:45.712500    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:13:45.712510    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:13:45.726361    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:13:45.726374    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:13:45.751372    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:13:45.751380    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:13:48.264665    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:53.267199    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:53.267407    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:13:53.289929    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:13:53.290047    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:13:53.309204    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:13:53.309287    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:13:53.321710    5250 logs.go:276] 2 containers: [7e18ed854af8 7fb3f2c0be6a]
	I0910 11:13:53.321784    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:13:53.332258    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:13:53.332331    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:13:53.342928    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:13:53.343001    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:13:53.353535    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:13:53.353606    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:13:53.363419    5250 logs.go:276] 0 containers: []
	W0910 11:13:53.363430    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:13:53.363485    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:13:53.374421    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:13:53.374439    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:13:53.374445    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:13:53.412707    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:13:53.412719    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:13:53.417003    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:13:53.417012    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:13:53.455895    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:13:53.455907    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:13:53.470573    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:13:53.470585    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:13:53.482727    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:13:53.482737    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:13:53.494828    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:13:53.494839    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:13:53.518127    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:13:53.518141    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:13:53.533036    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:13:53.533047    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:13:53.544629    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:13:53.544640    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:13:53.558517    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:13:53.558531    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:13:53.576926    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:13:53.576938    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:13:53.589150    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:13:53.589161    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:13:56.116366    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:01.118809    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:01.119032    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:01.143025    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:14:01.143152    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:01.160591    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:14:01.160676    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:01.173356    5250 logs.go:276] 2 containers: [7e18ed854af8 7fb3f2c0be6a]
	I0910 11:14:01.173431    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:01.184537    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:14:01.184608    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:01.195362    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:14:01.195433    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:01.205785    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:14:01.205862    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:01.216104    5250 logs.go:276] 0 containers: []
	W0910 11:14:01.216117    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:01.216174    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:01.226735    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:14:01.226752    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:01.226760    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:01.231514    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:14:01.231522    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:14:01.245382    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:14:01.245396    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:14:01.264933    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:14:01.264944    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:14:01.279064    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:14:01.279077    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:14:01.290837    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:14:01.290851    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:14:01.302685    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:01.302696    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:01.340703    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:14:01.340714    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:14:01.354749    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:14:01.354760    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:14:01.367038    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:14:01.367050    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:14:01.384323    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:01.384336    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:01.408791    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:14:01.408799    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:01.420335    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:01.420345    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:03.955068    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:08.956067    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:08.956367    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:08.985539    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:14:08.985671    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:09.003305    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:14:09.003393    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:09.016873    5250 logs.go:276] 2 containers: [7e18ed854af8 7fb3f2c0be6a]
	I0910 11:14:09.016953    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:09.028422    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:14:09.028491    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:09.038561    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:14:09.038637    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:09.048687    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:14:09.048763    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:09.058646    5250 logs.go:276] 0 containers: []
	W0910 11:14:09.058658    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:09.058719    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:09.069511    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:14:09.069527    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:14:09.069532    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:14:09.081578    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:14:09.081589    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:14:09.095875    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:14:09.095888    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:14:09.121140    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:14:09.121150    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:14:09.133005    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:14:09.133022    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:09.145015    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:09.145026    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:09.185395    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:09.185408    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:09.190212    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:09.190218    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:09.227714    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:14:09.227726    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:14:09.240238    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:09.240250    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:09.264170    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:14:09.264180    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:14:09.282360    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:14:09.282372    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:14:09.296111    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:14:09.296121    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
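
(Editor's note: before pulling logs, each cycle re-enumerates the control-plane containers with the same docker ps filter shown throughout this transcript. A hedged sketch of that discovery step — helper names here are illustrative, not minikube's:)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists container IDs whose names match k8s_<component>,
// mirroring the "docker ps -a --filter=name=k8s_..." calls in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// Same shape as the logs.go:276 lines, e.g. "2 containers: [...]".
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
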
	I0910 11:14:11.809660    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:16.811786    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:16.811895    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:16.827324    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:14:16.827397    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:16.837531    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:14:16.837604    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:16.847901    5250 logs.go:276] 2 containers: [7e18ed854af8 7fb3f2c0be6a]
	I0910 11:14:16.847973    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:16.858514    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:14:16.858579    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:16.868953    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:14:16.869025    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:16.880441    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:14:16.880516    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:16.890527    5250 logs.go:276] 0 containers: []
	W0910 11:14:16.890539    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:16.890592    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:16.901233    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:14:16.901248    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:14:16.901254    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:14:16.913093    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:16.913105    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:16.951163    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:14:16.951176    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:14:16.965897    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:14:16.965910    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:14:16.979980    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:14:16.979993    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:14:16.992055    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:14:16.992067    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:14:17.004202    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:14:17.004214    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:14:17.019519    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:14:17.019528    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:14:17.037003    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:17.037012    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:17.062143    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:17.062152    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:17.066810    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:17.066818    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:17.102169    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:14:17.102184    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:14:17.118420    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:14:17.118431    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
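
(Editor's note: every gathering step above is wrapped in /bin/bash -c, which is what lets compound commands such as the crictl/docker fallback run unchanged over the SSH runner. A minimal sketch under that assumption, reusing three representative commands from the log; the gather helper is illustrative:)

package main

import (
	"fmt"
	"os/exec"
)

// gather runs a shell command the way the ssh_runner entries do: through
// /bin/bash -c, so pipes, backticks, and || fallbacks survive intact.
func gather(name, cmd string) {
	// CombinedOutput returns whatever the command printed even on failure,
	// which is useful when a container has already exited.
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("%s: %v\n", name, err)
	}
	fmt.Printf("== %s ==\n%s", name, out)
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("coredns", "docker logs --tail 400 7e18ed854af8")
	gather("container status",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
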
	I0910 11:14:19.631556    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:24.634030    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:24.634220    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:24.655532    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:14:24.655628    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:24.671537    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:14:24.671612    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:24.683122    5250 logs.go:276] 2 containers: [7e18ed854af8 7fb3f2c0be6a]
	I0910 11:14:24.683191    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:24.693500    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:14:24.693569    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:24.704341    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:14:24.704406    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:24.715226    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:14:24.715286    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:24.725681    5250 logs.go:276] 0 containers: []
	W0910 11:14:24.725693    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:24.725752    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:24.737526    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:14:24.737549    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:24.737557    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:24.742453    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:14:24.742461    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:14:24.757215    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:14:24.757226    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:14:24.769224    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:24.769233    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:24.797638    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:14:24.797648    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:14:24.814394    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:14:24.814406    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:14:24.826309    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:14:24.826324    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:14:24.838030    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:14:24.838041    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:14:24.860481    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:24.860491    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:24.898453    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:24.898462    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:24.932604    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:14:24.932615    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:14:24.947428    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:14:24.947439    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:14:24.964179    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:14:24.964191    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:27.477791    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:32.479019    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:32.479523    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:32.516842    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:14:32.516972    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:32.539348    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:14:32.539448    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:32.553846    5250 logs.go:276] 2 containers: [7e18ed854af8 7fb3f2c0be6a]
	I0910 11:14:32.553922    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:32.566092    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:14:32.566161    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:32.577394    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:14:32.577471    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:32.587993    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:14:32.588056    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:32.598111    5250 logs.go:276] 0 containers: []
	W0910 11:14:32.598121    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:32.598174    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:32.609883    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:14:32.609900    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:14:32.609905    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:14:32.624577    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:14:32.624587    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:14:32.636134    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:14:32.636149    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:14:32.656320    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:14:32.656331    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:14:32.668238    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:14:32.668247    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:14:32.685694    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:32.685705    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:32.724499    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:32.724509    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:32.728625    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:14:32.728631    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:14:32.740922    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:14:32.740937    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:14:32.752782    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:32.752796    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:32.779015    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:14:32.779033    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:32.790841    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:32.790852    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:32.849384    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:14:32.849397    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:14:35.372057    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:40.374280    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:40.374516    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:40.401146    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:14:40.401267    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:40.417199    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:14:40.417275    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:40.435479    5250 logs.go:276] 2 containers: [7e18ed854af8 7fb3f2c0be6a]
	I0910 11:14:40.435557    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:40.446493    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:14:40.446566    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:40.460924    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:14:40.460991    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:40.471477    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:14:40.471548    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:40.481275    5250 logs.go:276] 0 containers: []
	W0910 11:14:40.481288    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:40.481349    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:40.492081    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:14:40.492096    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:14:40.492102    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:14:40.509775    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:40.509788    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:40.533249    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:40.533258    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:40.538954    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:14:40.538962    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:14:40.553358    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:14:40.553372    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:14:40.567584    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:14:40.567594    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:14:40.581414    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:14:40.581423    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:14:40.593148    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:14:40.593161    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:14:40.611699    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:14:40.611712    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:14:40.624000    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:14:40.624010    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:14:40.635323    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:40.635333    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:40.672650    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:40.672658    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:40.707373    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:14:40.707385    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:43.221977    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:48.224147    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:48.224333    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:48.238901    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:14:48.238980    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:48.250189    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:14:48.250256    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:48.260695    5250 logs.go:276] 3 containers: [de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:14:48.260768    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:48.271011    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:14:48.271087    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:48.281507    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:14:48.281576    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:48.291719    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:14:48.291798    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:48.302690    5250 logs.go:276] 0 containers: []
	W0910 11:14:48.302703    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:48.302767    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:48.313161    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:14:48.313176    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:14:48.313183    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:14:48.327084    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:14:48.327098    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:14:48.339167    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:14:48.339180    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:48.350282    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:48.350293    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:48.387827    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:14:48.387841    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:14:48.399995    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:14:48.400005    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:14:48.411554    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:14:48.411564    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:14:48.432766    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:48.432780    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:48.458739    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:48.458747    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:48.495556    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:14:48.495567    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:14:48.507445    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:14:48.507458    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:14:48.520906    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:14:48.520921    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:14:48.535727    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:48.535737    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:48.540600    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:14:48.540609    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:14:51.056302    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:56.058762    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:56.059088    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:56.094734    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:14:56.094870    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:56.113991    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:14:56.114077    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:56.128123    5250 logs.go:276] 3 containers: [de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:14:56.128206    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:56.139582    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:14:56.139665    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:56.150100    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:14:56.150165    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:56.160675    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:14:56.160747    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:56.171068    5250 logs.go:276] 0 containers: []
	W0910 11:14:56.171079    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:56.171142    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:56.181250    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:14:56.181272    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:14:56.181278    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:14:56.195748    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:14:56.195759    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:14:56.209725    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:56.209736    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:56.234953    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:14:56.234962    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:14:56.246842    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:14:56.246852    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:14:56.264941    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:56.264952    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:56.270977    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:56.270992    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:56.307731    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:14:56.307743    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:14:56.322341    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:14:56.322351    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:14:56.340080    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:14:56.340089    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:14:56.351717    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:14:56.351728    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:56.364910    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:56.364922    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:56.405175    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:14:56.405185    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:14:56.420224    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:14:56.420233    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:14:58.937222    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:03.939464    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:03.939683    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:03.964067    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:15:03.964173    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:03.981020    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:15:03.981108    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:03.994395    5250 logs.go:276] 4 containers: [fe45ed23e090 de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:15:03.994479    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:04.006232    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:15:04.006298    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:04.017440    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:15:04.017516    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:04.028017    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:15:04.028090    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:04.037952    5250 logs.go:276] 0 containers: []
	W0910 11:15:04.037963    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:04.038021    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:04.048604    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:15:04.048621    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:15:04.048627    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:15:04.063328    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:15:04.063343    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:15:04.074607    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:04.074617    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:04.110589    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:15:04.110600    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:15:04.125315    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:15:04.125328    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:15:04.136973    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:15:04.136983    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:15:04.149542    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:15:04.149553    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:15:04.167834    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:15:04.167844    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:15:04.184487    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:04.184497    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:04.189552    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:15:04.189559    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:15:04.203764    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:15:04.203774    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:15:04.215870    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:15:04.215880    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:15:04.228018    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:04.228029    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:04.268080    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:04.268088    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:04.292676    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:15:04.292686    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:06.807502    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:11.809758    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:11.810152    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:11.838862    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:15:11.838984    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:11.856644    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:15:11.856727    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:11.870491    5250 logs.go:276] 4 containers: [fe45ed23e090 de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:15:11.870569    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:11.882752    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:15:11.882826    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:11.893398    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:15:11.893466    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:11.904886    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:15:11.904959    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:11.918855    5250 logs.go:276] 0 containers: []
	W0910 11:15:11.918865    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:11.918919    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:11.929575    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:15:11.929591    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:11.929596    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:11.967393    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:15:11.967404    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:15:11.982163    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:15:11.982177    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:15:11.996223    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:15:11.996236    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:15:12.008926    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:15:12.008937    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:15:12.023635    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:12.023646    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:12.048312    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:15:12.048329    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:12.060771    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:12.060783    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:12.100967    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:15:12.100980    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:15:12.116962    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:15:12.116975    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:15:12.129924    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:15:12.129936    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:15:12.145318    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:12.145330    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:12.150201    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:15:12.150208    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:15:12.161951    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:15:12.161964    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:15:12.174783    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:15:12.174798    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:15:14.699430    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:19.701831    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:19.702214    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:19.738859    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:15:19.739014    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:19.759675    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:15:19.759766    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:19.774763    5250 logs.go:276] 4 containers: [fe45ed23e090 de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:15:19.774848    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:19.787232    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:15:19.787297    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:19.797913    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:15:19.797981    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:19.808441    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:15:19.808504    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:19.819116    5250 logs.go:276] 0 containers: []
	W0910 11:15:19.819127    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:19.819190    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:19.829711    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:15:19.829729    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:15:19.829735    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:15:19.847175    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:19.847186    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:19.883189    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:15:19.883203    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:15:19.899564    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:15:19.899576    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:15:19.915174    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:15:19.915189    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:15:19.927495    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:15:19.927509    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:15:19.944392    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:15:19.944406    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:15:19.961082    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:19.961093    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:20.001022    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:15:20.001031    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:15:20.012422    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:15:20.012433    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:15:20.026989    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:20.027002    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:20.050746    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:20.050754    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:20.054900    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:15:20.054910    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:15:20.066310    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:15:20.066332    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:15:20.079550    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:15:20.079562    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:22.594092    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:27.596306    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:27.596444    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:27.609310    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:15:27.609394    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:27.620963    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:15:27.621031    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:27.631704    5250 logs.go:276] 4 containers: [fe45ed23e090 de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:15:27.631784    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:27.642243    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:15:27.642314    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:27.653241    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:15:27.653313    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:27.664100    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:15:27.664168    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:27.674798    5250 logs.go:276] 0 containers: []
	W0910 11:15:27.674809    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:27.674864    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:27.689553    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:15:27.689569    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:15:27.689574    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:15:27.702776    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:15:27.702790    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:15:27.717004    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:15:27.717017    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:15:27.732969    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:15:27.732983    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:15:27.750378    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:27.750387    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:27.787772    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:15:27.787780    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:15:27.799155    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:15:27.799165    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:15:27.811232    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:15:27.811244    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:15:27.825893    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:15:27.825906    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:15:27.839585    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:27.839595    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:27.863139    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:15:27.863149    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:27.875004    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:27.875017    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:27.879175    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:27.879183    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:27.913570    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:15:27.913580    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:15:27.934927    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:15:27.934941    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:15:30.451414    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:35.452002    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:35.452147    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:35.471239    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:15:35.471320    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:35.483052    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:15:35.483128    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:35.493852    5250 logs.go:276] 4 containers: [fe45ed23e090 de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:15:35.493922    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:35.504174    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:15:35.504245    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:35.515230    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:15:35.515312    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:35.526056    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:15:35.526117    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:35.540368    5250 logs.go:276] 0 containers: []
	W0910 11:15:35.540380    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:35.540432    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:35.551231    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:15:35.551248    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:35.551254    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:35.589440    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:35.589451    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:35.593650    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:15:35.593659    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:15:35.605735    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:35.605750    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:35.630650    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:15:35.630660    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:35.641996    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:15:35.642010    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:15:35.662334    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:15:35.662346    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:15:35.676732    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:15:35.676746    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:15:35.690732    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:15:35.690747    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:15:35.705518    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:35.705528    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:35.740830    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:15:35.740844    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:15:35.753294    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:15:35.753316    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:15:35.765183    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:15:35.765194    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:15:35.776984    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:15:35.776998    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:15:35.794991    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:15:35.795001    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:15:38.308848    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:43.311071    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:43.311221    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:43.322562    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:15:43.322644    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:43.332952    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:15:43.333022    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:43.343761    5250 logs.go:276] 4 containers: [fe45ed23e090 de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:15:43.343835    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:43.358767    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:15:43.358843    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:43.369009    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:15:43.369077    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:43.379637    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:15:43.379705    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:43.390173    5250 logs.go:276] 0 containers: []
	W0910 11:15:43.390187    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:43.390248    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:43.401070    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:15:43.401091    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:15:43.401098    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:15:43.418938    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:15:43.418951    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:43.430821    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:15:43.430835    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:15:43.442877    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:43.442890    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:43.484295    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:15:43.484310    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:15:43.499315    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:15:43.499326    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:15:43.511837    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:15:43.511850    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:15:43.529279    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:15:43.529295    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:15:43.543951    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:15:43.543963    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:15:43.555906    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:15:43.555921    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:15:43.571524    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:43.571536    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:43.595767    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:43.595778    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:43.600163    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:43.600172    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:43.636725    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:15:43.636738    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:15:43.651810    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:15:43.651827    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:15:46.165585    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:51.167768    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:51.167870    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:51.179186    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:15:51.179262    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:51.189723    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:15:51.189799    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:51.201579    5250 logs.go:276] 4 containers: [fe45ed23e090 de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:15:51.201651    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:51.212306    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:15:51.212370    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:51.222943    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:15:51.223016    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:51.233652    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:15:51.233719    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:51.244118    5250 logs.go:276] 0 containers: []
	W0910 11:15:51.244131    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:51.244197    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:51.255468    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:15:51.255487    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:15:51.255494    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:15:51.267753    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:15:51.267765    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:15:51.282440    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:15:51.282452    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:15:51.298361    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:51.298373    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:51.324378    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:15:51.324390    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:51.336850    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:51.336863    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:51.374728    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:15:51.374739    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:15:51.391276    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:15:51.391287    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:15:51.408286    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:15:51.408297    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:15:51.420207    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:51.420217    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:51.425472    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:15:51.425478    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:15:51.438661    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:15:51.438672    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:15:51.455743    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:15:51.455752    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:15:51.467628    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:15:51.467640    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:15:51.485826    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:51.485837    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:54.026695    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:59.028789    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:59.028897    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:59.040381    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:15:59.040456    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:59.050586    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:15:59.050658    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:59.061033    5250 logs.go:276] 4 containers: [fe45ed23e090 de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:15:59.061097    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:59.075062    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:15:59.075137    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:59.085659    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:15:59.085731    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:59.097020    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:15:59.097091    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:59.108225    5250 logs.go:276] 0 containers: []
	W0910 11:15:59.108237    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:59.108297    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:59.119979    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:15:59.119997    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:15:59.120002    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:15:59.133708    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:15:59.133721    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:15:59.151501    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:15:59.151515    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:15:59.168670    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:15:59.168681    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:15:59.183216    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:15:59.183227    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:15:59.195066    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:15:59.195081    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:15:59.213969    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:15:59.213980    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:15:59.232525    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:59.232538    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:59.256496    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:15:59.256508    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:59.268492    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:59.268504    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:59.304626    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:15:59.304638    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:15:59.318175    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:15:59.318190    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:15:59.336027    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:15:59.336043    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:15:59.351457    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:59.351468    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:59.389582    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:59.389595    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:16:01.895226    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:06.897537    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:06.898096    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:16:06.929049    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:16:06.929180    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:16:06.948261    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:16:06.948347    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:16:06.963405    5250 logs.go:276] 4 containers: [fe45ed23e090 de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:16:06.963487    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:16:06.974928    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:16:06.974997    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:16:06.986955    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:16:06.987025    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:16:06.997567    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:16:06.997639    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:16:07.009180    5250 logs.go:276] 0 containers: []
	W0910 11:16:07.009191    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:16:07.009249    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:16:07.019900    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:16:07.019920    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:16:07.019925    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:16:07.031383    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:16:07.031394    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:16:07.046964    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:16:07.046974    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:16:07.058924    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:16:07.058935    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:16:07.063597    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:16:07.063606    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:16:07.077228    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:16:07.077237    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:16:07.088966    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:16:07.088976    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:16:07.100822    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:16:07.100832    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:16:07.120871    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:16:07.120882    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:16:07.155505    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:16:07.155517    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:16:07.170352    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:16:07.170362    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:16:07.184325    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:16:07.184336    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:16:07.210296    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:16:07.210307    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:16:07.250507    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:16:07.250522    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:16:07.262968    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:16:07.262979    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:16:09.775225    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:14.776244    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:14.776492    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:16:14.800484    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:16:14.800603    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:16:14.817266    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:16:14.817346    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:16:14.829923    5250 logs.go:276] 4 containers: [fe45ed23e090 de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:16:14.829997    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:16:14.844725    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:16:14.844816    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:16:14.855751    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:16:14.855838    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:16:14.867347    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:16:14.867417    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:16:14.877858    5250 logs.go:276] 0 containers: []
	W0910 11:16:14.877873    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:16:14.877934    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:16:14.888703    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:16:14.888721    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:16:14.888727    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:16:14.893439    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:16:14.893446    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:16:14.905836    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:16:14.905851    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:16:14.921045    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:16:14.921058    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:16:14.934396    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:16:14.934409    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:16:14.957906    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:16:14.957919    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:16:14.995881    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:16:14.995891    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:16:15.007870    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:16:15.007880    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:16:15.029703    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:16:15.029714    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:16:15.041571    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:16:15.041582    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:16:15.053570    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:16:15.053585    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:16:15.089251    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:16:15.089262    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:16:15.104569    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:16:15.104581    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:16:15.123952    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:16:15.123967    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:16:15.135703    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:16:15.135713    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:16:17.649467    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:22.651612    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:22.651753    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:16:22.663978    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:16:22.664048    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:16:22.674862    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:16:22.674940    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:16:22.685327    5250 logs.go:276] 4 containers: [fe45ed23e090 de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:16:22.685394    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:16:22.695707    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:16:22.695778    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:16:22.705515    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:16:22.705587    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:16:22.717035    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:16:22.717099    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:16:22.727801    5250 logs.go:276] 0 containers: []
	W0910 11:16:22.727813    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:16:22.727871    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:16:22.737895    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:16:22.737916    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:16:22.737922    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:16:22.752291    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:16:22.752302    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:16:22.767207    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:16:22.767218    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:16:22.779518    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:16:22.779532    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:16:22.790889    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:16:22.790900    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:16:22.795346    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:16:22.795355    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:16:22.833499    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:16:22.833514    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:16:22.845130    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:16:22.845141    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:16:22.856184    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:16:22.856197    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:16:22.872829    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:16:22.872841    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:16:22.897066    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:16:22.897077    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:16:22.908879    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:16:22.908892    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:16:22.947881    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:16:22.947898    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:16:22.962252    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:16:22.962265    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:16:22.977322    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:16:22.977337    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:16:25.495795    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:30.497247    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:30.497340    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:16:30.508792    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:16:30.508873    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:16:30.520209    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:16:30.520277    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:16:30.530375    5250 logs.go:276] 4 containers: [fe45ed23e090 de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:16:30.530439    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:16:30.542990    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:16:30.543067    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:16:30.556839    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:16:30.556907    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:16:30.567349    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:16:30.567418    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:16:30.578286    5250 logs.go:276] 0 containers: []
	W0910 11:16:30.578298    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:16:30.578361    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:16:30.595627    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:16:30.595648    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:16:30.595654    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:16:30.607600    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:16:30.607612    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:16:30.620156    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:16:30.620169    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:16:30.632001    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:16:30.632010    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:16:30.636321    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:16:30.636330    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:16:30.651770    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:16:30.651783    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:16:30.663502    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:16:30.663514    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:16:30.681217    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:16:30.681228    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:16:30.703102    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:16:30.703111    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:16:30.714936    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:16:30.714949    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:16:30.754120    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:16:30.754128    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:16:30.789172    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:16:30.789186    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:16:30.800932    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:16:30.800945    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:16:30.812814    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:16:30.812824    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:16:30.831209    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:16:30.831223    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:16:33.357230    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:38.359458    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:38.359860    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:16:38.400041    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:16:38.400189    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:16:38.428078    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:16:38.428157    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:16:38.442010    5250 logs.go:276] 4 containers: [82e428ee9c3d fe45ed23e090 de0d9e14794e 7e18ed854af8]
	I0910 11:16:38.442090    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:16:38.453645    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:16:38.453716    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:16:38.464549    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:16:38.464621    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:16:38.475376    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:16:38.475450    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:16:38.488263    5250 logs.go:276] 0 containers: []
	W0910 11:16:38.488275    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:16:38.488336    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:16:38.500988    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:16:38.501005    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:16:38.501010    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:16:38.512498    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:16:38.512507    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:16:38.553182    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:16:38.553194    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:16:38.567445    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:16:38.567458    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:16:38.581585    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:16:38.581597    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:16:38.593595    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:16:38.593605    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:16:38.606163    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:16:38.606174    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:16:38.630771    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:16:38.630780    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:16:38.654784    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:16:38.654792    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:16:38.666857    5250 logs.go:123] Gathering logs for coredns [82e428ee9c3d] ...
	I0910 11:16:38.666867    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e428ee9c3d"
	I0910 11:16:38.679461    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:16:38.679476    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:16:38.691737    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:16:38.691751    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:16:38.730513    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:16:38.730521    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:16:38.734995    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:16:38.735001    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:16:38.749030    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:16:38.749040    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:16:41.265998    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:46.268133    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:46.272031    5250 out.go:201] 
	W0910 11:16:46.276617    5250 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0910 11:16:46.276633    5250 out.go:270] * 
	W0910 11:16:46.277605    5250 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:16:46.292727    5250 out.go:201] 

** /stderr **
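The stderr dump above is dominated by minikube's diagnostics cycle: after every failed healthz probe it enumerates each control-plane component's containers with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` and then tails each container's logs. Below is a minimal Go sketch of that gathering pattern, not minikube's actual logs.go; the docker commands are copied verbatim from the ssh_runner lines above, and the sketch assumes it runs where those commands ran (inside the guest, as a user permitted to call docker):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
		for _, name := range components {
			// Same filter minikube issues: docker ps -a --filter=name=k8s_<name> --format={{.ID}}
			out, err := exec.Command("docker", "ps", "-a",
				"--filter=name=k8s_"+name, "--format={{.ID}}").Output()
			if err != nil {
				fmt.Printf("listing %s containers failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("%d containers for %q: %v\n", len(ids), name, ids) // mirrors the logs.go:276 lines
			for _, id := range ids {
				// Same tail depth as the log: docker logs --tail 400 <id>
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("==> %s [%s] <==\n%s\n", name, id, logs)
			}
		}
	}

Running this by hand during the six-minute wait would show the same picture the harness captured: every component container exists (kindnet excepted, as warned), yet the apiserver never turns healthy.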
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-978000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-09-10 11:16:46.41364 -0700 PDT m=+2905.892150710
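The verdict matches the repeating pattern in the stderr dump: roughly every eight seconds minikube logs `Checking apiserver healthz at https://10.0.2.15:8443/healthz ...`, and five seconds later the probe dies with `context deadline exceeded`, until the 6m0s node-wait budget is exhausted. The following is a minimal sketch of that probe loop under stated assumptions: the URL, the ~5s per-request timeout, and the 6m0s budget are taken from the log lines above, while `InsecureSkipVerify` merely stands in for minikube's real client-certificate handling.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Per-request timeout: the log shows "stopped" ~5s after each "Checking" line.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(6 * time.Minute) // "wait 6m0s for node" from the failure message
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err == nil {
				healthy := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if healthy {
					fmt.Println("apiserver healthz reported healthy")
					return
				}
			}
			time.Sleep(2 * time.Second) // the harness interleaves log gathering here instead of sleeping
		}
		fmt.Println("apiserver healthz never reported healthy: context deadline exceeded")
	}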
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-978000 -n running-upgrade-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-978000 -n running-upgrade-978000: exit status 2 (15.568712041s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-978000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-278000          | force-systemd-flag-278000 | jenkins | v1.34.0 | 10 Sep 24 11:06 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-177000              | force-systemd-env-177000  | jenkins | v1.34.0 | 10 Sep 24 11:06 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-177000           | force-systemd-env-177000  | jenkins | v1.34.0 | 10 Sep 24 11:06 PDT | 10 Sep 24 11:06 PDT |
	| start   | -p docker-flags-081000                | docker-flags-081000       | jenkins | v1.34.0 | 10 Sep 24 11:06 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-278000             | force-systemd-flag-278000 | jenkins | v1.34.0 | 10 Sep 24 11:06 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-278000          | force-systemd-flag-278000 | jenkins | v1.34.0 | 10 Sep 24 11:06 PDT | 10 Sep 24 11:06 PDT |
	| start   | -p cert-expiration-717000             | cert-expiration-717000    | jenkins | v1.34.0 | 10 Sep 24 11:06 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-081000 ssh               | docker-flags-081000       | jenkins | v1.34.0 | 10 Sep 24 11:06 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-081000 ssh               | docker-flags-081000       | jenkins | v1.34.0 | 10 Sep 24 11:06 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-081000                | docker-flags-081000       | jenkins | v1.34.0 | 10 Sep 24 11:06 PDT | 10 Sep 24 11:06 PDT |
	| start   | -p cert-options-070000                | cert-options-070000       | jenkins | v1.34.0 | 10 Sep 24 11:06 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-070000 ssh               | cert-options-070000       | jenkins | v1.34.0 | 10 Sep 24 11:07 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-070000 -- sudo        | cert-options-070000       | jenkins | v1.34.0 | 10 Sep 24 11:07 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-070000                | cert-options-070000       | jenkins | v1.34.0 | 10 Sep 24 11:07 PDT | 10 Sep 24 11:07 PDT |
	| start   | -p running-upgrade-978000             | minikube                  | jenkins | v1.26.0 | 10 Sep 24 11:07 PDT | 10 Sep 24 11:08 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-978000             | running-upgrade-978000    | jenkins | v1.34.0 | 10 Sep 24 11:08 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-717000             | cert-expiration-717000    | jenkins | v1.34.0 | 10 Sep 24 11:10 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-717000             | cert-expiration-717000    | jenkins | v1.34.0 | 10 Sep 24 11:10 PDT | 10 Sep 24 11:10 PDT |
	| start   | -p kubernetes-upgrade-590000          | kubernetes-upgrade-590000 | jenkins | v1.34.0 | 10 Sep 24 11:10 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-590000          | kubernetes-upgrade-590000 | jenkins | v1.34.0 | 10 Sep 24 11:10 PDT | 10 Sep 24 11:10 PDT |
	| start   | -p kubernetes-upgrade-590000          | kubernetes-upgrade-590000 | jenkins | v1.34.0 | 10 Sep 24 11:10 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-590000          | kubernetes-upgrade-590000 | jenkins | v1.34.0 | 10 Sep 24 11:10 PDT | 10 Sep 24 11:10 PDT |
	| start   | -p stopped-upgrade-163000             | minikube                  | jenkins | v1.26.0 | 10 Sep 24 11:10 PDT | 10 Sep 24 11:11 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-163000 stop           | minikube                  | jenkins | v1.26.0 | 10 Sep 24 11:11 PDT | 10 Sep 24 11:11 PDT |
	| start   | -p stopped-upgrade-163000             | stopped-upgrade-163000    | jenkins | v1.34.0 | 10 Sep 24 11:11 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 11:11:16
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 11:11:16.627101    5456 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:11:16.627274    5456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:11:16.627278    5456 out.go:358] Setting ErrFile to fd 2...
	I0910 11:11:16.627281    5456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:11:16.627446    5456 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:11:16.628827    5456 out.go:352] Setting JSON to false
	I0910 11:11:16.648337    5456 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4240,"bootTime":1725987636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:11:16.648404    5456 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:11:16.652863    5456 out.go:177] * [stopped-upgrade-163000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:11:16.660905    5456 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:11:16.660949    5456 notify.go:220] Checking for updates...
	I0910 11:11:16.667827    5456 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:11:16.669284    5456 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:11:16.672775    5456 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:11:16.675851    5456 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:11:16.678832    5456 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:11:16.682069    5456 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0910 11:11:16.685804    5456 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0910 11:11:16.688982    5456 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:11:16.693762    5456 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 11:11:16.700845    5456 start.go:297] selected driver: qemu2
	I0910 11:11:16.700851    5456 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-163000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50528 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0910 11:11:16.700901    5456 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:11:16.703754    5456 cni.go:84] Creating CNI manager for ""
	I0910 11:11:16.703780    5456 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:11:16.703810    5456 start.go:340] cluster config:
	{Name:stopped-upgrade-163000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50528 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0910 11:11:16.703858    5456 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:11:16.710854    5456 out.go:177] * Starting "stopped-upgrade-163000" primary control-plane node in "stopped-upgrade-163000" cluster
	I0910 11:11:16.713670    5456 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0910 11:11:16.713687    5456 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0910 11:11:16.713695    5456 cache.go:56] Caching tarball of preloaded images
	I0910 11:11:16.713755    5456 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:11:16.713761    5456 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0910 11:11:16.713815    5456 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/config.json ...
	I0910 11:11:16.714325    5456 start.go:360] acquireMachinesLock for stopped-upgrade-163000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:11:16.714363    5456 start.go:364] duration metric: took 31.084µs to acquireMachinesLock for "stopped-upgrade-163000"
	I0910 11:11:16.714375    5456 start.go:96] Skipping create...Using existing machine configuration
	I0910 11:11:16.714382    5456 fix.go:54] fixHost starting: 
	I0910 11:11:16.714500    5456 fix.go:112] recreateIfNeeded on stopped-upgrade-163000: state=Stopped err=<nil>
	W0910 11:11:16.714509    5456 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 11:11:16.722696    5456 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-163000" ...
	I0910 11:11:13.338355    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:11:16.726812    5456 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:11:16.726904    5456 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/stopped-upgrade-163000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/stopped-upgrade-163000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/stopped-upgrade-163000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50494-:22,hostfwd=tcp::50495-:2376,hostname=stopped-upgrade-163000 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/stopped-upgrade-163000/disk.qcow2
	I0910 11:11:16.774396    5456 main.go:141] libmachine: STDOUT: 
	I0910 11:11:16.774424    5456 main.go:141] libmachine: STDERR: 
	I0910 11:11:16.774430    5456 main.go:141] libmachine: Waiting for VM to start (ssh -p 50494 docker@127.0.0.1)...
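
Note: the "Waiting for VM to start" step above polls the host-forwarded SSH port (tcp::50494-:22 in the qemu invocation) until the guest accepts a connection. A minimal Go sketch of that wait loop, with an assumed 5-minute deadline and illustrative function names rather than minikube's actual implementation:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH dials the host-forwarded SSH port until the guest
    // accepts a TCP connection or the deadline expires.
    func waitForSSH(addr string, deadline time.Duration) error {
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil // port is accepting connections
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
        // 50494 is the hostfwd SSH port from the qemu command above.
        if err := waitForSSH("127.0.0.1:50494", 5*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
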
	I0910 11:11:18.339888    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:11:18.340346    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:11:18.382113    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:11:18.382250    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:11:18.403910    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:11:18.404002    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:11:18.418916    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:11:18.418996    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:11:18.431671    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:11:18.431748    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:11:18.442983    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:11:18.443043    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:11:18.453798    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:11:18.453864    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:11:18.463901    5250 logs.go:276] 0 containers: []
	W0910 11:11:18.463911    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:11:18.463971    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:11:18.474455    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:11:18.474472    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:11:18.474477    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:11:18.486473    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:11:18.486483    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:11:18.497962    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:11:18.497972    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:11:18.538607    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:11:18.538618    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:11:18.572761    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:11:18.572775    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:11:18.586550    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:11:18.586561    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:11:18.603798    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:11:18.603812    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:11:18.615501    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:11:18.615516    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:11:18.629307    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:11:18.629320    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:11:18.644751    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:11:18.644767    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:11:18.664579    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:11:18.664592    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:11:18.676029    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:11:18.676040    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:11:18.680138    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:11:18.680144    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:11:18.695027    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:11:18.695039    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:11:18.707126    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:11:18.707134    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:11:18.721190    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:11:18.721203    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:11:18.732951    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:11:18.732962    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
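
Note: the cycle above repeats throughout this log: probe https://10.0.2.15:8443/healthz with a short client timeout, and when it fails, enumerate the control-plane containers and dump their recent logs. A sketch of the healthz probe under assumed values (5s timeout, TLS verification skipped because the apiserver certificate is not trusted by the host); this is illustrative, not minikube's api_server.go:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probeHealthz mirrors the "Checking apiserver healthz" step: an HTTPS
    // GET with a short client timeout; certificate checks are skipped since
    // the apiserver cert is self-signed from the host's point of view.
    func probeHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "context deadline exceeded" as in the log
        }
        defer resp.Body.Close()
        fmt.Println("healthz:", resp.Status)
        return nil
    }

    func main() {
        if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println("stopped:", err)
        }
    }
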
	I0910 11:11:21.259477    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:11:26.262129    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:11:26.262293    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:11:26.275211    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:11:26.275278    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:11:26.288496    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:11:26.288565    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:11:26.302285    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:11:26.302354    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:11:26.313774    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:11:26.313842    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:11:26.328608    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:11:26.328677    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:11:26.340000    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:11:26.340074    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:11:26.350628    5250 logs.go:276] 0 containers: []
	W0910 11:11:26.350639    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:11:26.350697    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:11:26.361651    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:11:26.361669    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:11:26.361675    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:11:26.404901    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:11:26.404908    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:11:26.419964    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:11:26.419975    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:11:26.432020    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:11:26.432035    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:11:26.448501    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:11:26.448512    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:11:26.462451    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:11:26.462463    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:11:26.467400    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:11:26.467407    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:11:26.481303    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:11:26.481316    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:11:26.495607    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:11:26.495616    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:11:26.513448    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:11:26.513460    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:11:26.538953    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:11:26.538961    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:11:26.551721    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:11:26.551729    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:11:26.563095    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:11:26.563105    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:11:26.577623    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:11:26.577634    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:11:26.589291    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:11:26.589302    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:11:26.626090    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:11:26.626101    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:11:26.640049    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:11:26.640062    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:11:29.154380    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:11:34.156567    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:11:34.156809    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:11:34.178439    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:11:34.178531    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:11:34.192015    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:11:34.192099    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:11:34.203509    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:11:34.203580    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:11:34.214189    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:11:34.214274    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:11:34.232618    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:11:34.232689    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:11:34.243000    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:11:34.243070    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:11:34.252740    5250 logs.go:276] 0 containers: []
	W0910 11:11:34.252751    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:11:34.252810    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:11:34.262914    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:11:34.262933    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:11:34.262939    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:11:34.281974    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:11:34.281988    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:11:34.293597    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:11:34.293611    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:11:34.307434    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:11:34.307447    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:11:34.322214    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:11:34.322228    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:11:34.333268    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:11:34.333282    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:11:34.344339    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:11:34.344349    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:11:34.362013    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:11:34.362026    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:11:34.404561    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:11:34.404573    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:11:34.409027    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:11:34.409033    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:11:34.447002    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:11:34.447016    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:11:34.459149    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:11:34.459161    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:11:34.472713    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:11:34.472727    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:11:34.484217    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:11:34.484231    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:11:34.495613    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:11:34.495625    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:11:34.518629    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:11:34.518638    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:11:34.532616    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:11:34.532627    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:11:37.047090    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:11:36.854398    5456 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/config.json ...
	I0910 11:11:36.855189    5456 machine.go:93] provisionDockerMachine start ...
	I0910 11:11:36.855355    5456 main.go:141] libmachine: Using SSH client type: native
	I0910 11:11:36.855878    5456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105367ba0] 0x10536a400 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0910 11:11:36.855902    5456 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 11:11:36.944734    5456 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 11:11:36.944769    5456 buildroot.go:166] provisioning hostname "stopped-upgrade-163000"
	I0910 11:11:36.944907    5456 main.go:141] libmachine: Using SSH client type: native
	I0910 11:11:36.945175    5456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105367ba0] 0x10536a400 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0910 11:11:36.945186    5456 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-163000 && echo "stopped-upgrade-163000" | sudo tee /etc/hostname
	I0910 11:11:37.025422    5456 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-163000
	
	I0910 11:11:37.025502    5456 main.go:141] libmachine: Using SSH client type: native
	I0910 11:11:37.025664    5456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105367ba0] 0x10536a400 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0910 11:11:37.025675    5456 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-163000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-163000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-163000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 11:11:37.096436    5456 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 11:11:37.096448    5456 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19598-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19598-1276/.minikube}
	I0910 11:11:37.096461    5456 buildroot.go:174] setting up certificates
	I0910 11:11:37.096466    5456 provision.go:84] configureAuth start
	I0910 11:11:37.096470    5456 provision.go:143] copyHostCerts
	I0910 11:11:37.096552    5456 exec_runner.go:144] found /Users/jenkins/minikube-integration/19598-1276/.minikube/cert.pem, removing ...
	I0910 11:11:37.096560    5456 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19598-1276/.minikube/cert.pem
	I0910 11:11:37.096671    5456 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19598-1276/.minikube/cert.pem (1123 bytes)
	I0910 11:11:37.096853    5456 exec_runner.go:144] found /Users/jenkins/minikube-integration/19598-1276/.minikube/key.pem, removing ...
	I0910 11:11:37.096858    5456 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19598-1276/.minikube/key.pem
	I0910 11:11:37.096909    5456 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19598-1276/.minikube/key.pem (1675 bytes)
	I0910 11:11:37.097016    5456 exec_runner.go:144] found /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.pem, removing ...
	I0910 11:11:37.097022    5456 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.pem
	I0910 11:11:37.097073    5456 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.pem (1078 bytes)
	I0910 11:11:37.097159    5456 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-163000 san=[127.0.0.1 localhost minikube stopped-upgrade-163000]
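
Note: configureAuth regenerates a server certificate signed by the local minikube CA with the SANs listed above (127.0.0.1, localhost, minikube, stopped-upgrade-163000). A compressed crypto/x509 sketch of that kind of issuance; the key size, validity window, and in-memory CA here are placeholder choices, not what provision.go actually does:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Illustrative in-memory CA; minikube loads ca.pem/ca-key.pem
        // from .minikube/certs instead.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SANs from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-163000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-163000"},
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
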
	I0910 11:11:37.168755    5456 provision.go:177] copyRemoteCerts
	I0910 11:11:37.168796    5456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 11:11:37.168803    5456 sshutil.go:53] new ssh client: &{IP:localhost Port:50494 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/stopped-upgrade-163000/id_rsa Username:docker}
	I0910 11:11:37.204235    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0910 11:11:37.210841    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 11:11:37.217496    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 11:11:37.224716    5456 provision.go:87] duration metric: took 128.248916ms to configureAuth
	I0910 11:11:37.224725    5456 buildroot.go:189] setting minikube options for container-runtime
	I0910 11:11:37.224829    5456 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0910 11:11:37.224867    5456 main.go:141] libmachine: Using SSH client type: native
	I0910 11:11:37.224953    5456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105367ba0] 0x10536a400 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0910 11:11:37.224963    5456 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 11:11:37.291752    5456 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 11:11:37.291762    5456 buildroot.go:70] root file system type: tmpfs
	I0910 11:11:37.291817    5456 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 11:11:37.291867    5456 main.go:141] libmachine: Using SSH client type: native
	I0910 11:11:37.291997    5456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105367ba0] 0x10536a400 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0910 11:11:37.292031    5456 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 11:11:37.362722    5456 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 11:11:37.362782    5456 main.go:141] libmachine: Using SSH client type: native
	I0910 11:11:37.362904    5456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105367ba0] 0x10536a400 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0910 11:11:37.362914    5456 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 11:11:37.704376    5456 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0910 11:11:37.704393    5456 machine.go:96] duration metric: took 849.217208ms to provisionDockerMachine
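
Note: the command above renders docker.service.new, diffs it against the installed unit, and only swaps the file in and restarts dockerd when they differ; here the diff fails because no unit existed yet, so the new file is installed and the service enabled. A local Go sketch of the same compare-then-replace idea (paths and permissions are illustrative):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // installIfChanged writes newContent to path only when it differs from
    // the current file, and reports whether a service restart is needed.
    func installIfChanged(path string, newContent []byte) (bool, error) {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, newContent) {
            return false, nil // unchanged: skip daemon-reload/restart
        }
        // Missing file (as in the log: "can't stat ... docker.service")
        // or changed content: install the new unit.
        if err := os.WriteFile(path, newContent, 0o644); err != nil {
            return false, err
        }
        return true, nil
    }

    func main() {
        changed, err := installIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
        fmt.Println(changed, err)
    }
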
	I0910 11:11:37.704400    5456 start.go:293] postStartSetup for "stopped-upgrade-163000" (driver="qemu2")
	I0910 11:11:37.704407    5456 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 11:11:37.704486    5456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 11:11:37.704498    5456 sshutil.go:53] new ssh client: &{IP:localhost Port:50494 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/stopped-upgrade-163000/id_rsa Username:docker}
	I0910 11:11:37.740433    5456 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 11:11:37.741792    5456 info.go:137] Remote host: Buildroot 2021.02.12
	I0910 11:11:37.741799    5456 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19598-1276/.minikube/addons for local assets ...
	I0910 11:11:37.741885    5456 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19598-1276/.minikube/files for local assets ...
	I0910 11:11:37.742009    5456 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19598-1276/.minikube/files/etc/ssl/certs/17952.pem -> 17952.pem in /etc/ssl/certs
	I0910 11:11:37.742137    5456 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 11:11:37.745240    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/files/etc/ssl/certs/17952.pem --> /etc/ssl/certs/17952.pem (1708 bytes)
	I0910 11:11:37.752673    5456 start.go:296] duration metric: took 48.2695ms for postStartSetup
	I0910 11:11:37.752686    5456 fix.go:56] duration metric: took 21.0388655s for fixHost
	I0910 11:11:37.752719    5456 main.go:141] libmachine: Using SSH client type: native
	I0910 11:11:37.752817    5456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105367ba0] 0x10536a400 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0910 11:11:37.752827    5456 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 11:11:37.818698    5456 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725991898.281001712
	
	I0910 11:11:37.818705    5456 fix.go:216] guest clock: 1725991898.281001712
	I0910 11:11:37.818709    5456 fix.go:229] Guest: 2024-09-10 11:11:38.281001712 -0700 PDT Remote: 2024-09-10 11:11:37.752688 -0700 PDT m=+21.157374376 (delta=528.313712ms)
	I0910 11:11:37.818720    5456 fix.go:200] guest clock delta is within tolerance: 528.313712ms
	I0910 11:11:37.818723    5456 start.go:83] releasing machines lock for "stopped-upgrade-163000", held for 21.104912084s
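
Note: the fix.go lines above read the guest clock via `date +%s.%N` and accept the ~528ms delta as within tolerance. A sketch of that comparison; the 2-second tolerance below is a placeholder, since the log does not state the actual threshold:

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func main() {
        // Guest output from `date +%s.%N`, as echoed in the log above.
        guestOut := "1725991898.281001712"
        secs, _ := strconv.ParseFloat(guestOut, 64)
        // Float parsing loses some nanosecond precision; fine for a sketch.
        guest := time.Unix(0, int64(secs*float64(time.Second)))

        delta := guest.Sub(time.Now())
        if delta < 0 {
            delta = -delta
        }
        if delta < 2*time.Second { // placeholder tolerance
            fmt.Println("guest clock delta within tolerance:", delta)
        } else {
            fmt.Println("clock skew too large:", delta)
        }
    }
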
	I0910 11:11:37.818786    5456 ssh_runner.go:195] Run: cat /version.json
	I0910 11:11:37.818800    5456 sshutil.go:53] new ssh client: &{IP:localhost Port:50494 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/stopped-upgrade-163000/id_rsa Username:docker}
	I0910 11:11:37.818787    5456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 11:11:37.818857    5456 sshutil.go:53] new ssh client: &{IP:localhost Port:50494 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/stopped-upgrade-163000/id_rsa Username:docker}
	W0910 11:11:37.819408    5456 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50494: connect: connection refused
	I0910 11:11:37.819433    5456 retry.go:31] will retry after 222.187113ms: dial tcp [::1]:50494: connect: connection refused
	W0910 11:11:37.851656    5456 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0910 11:11:37.851712    5456 ssh_runner.go:195] Run: systemctl --version
	I0910 11:11:37.853579    5456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 11:11:37.855332    5456 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 11:11:37.855360    5456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0910 11:11:37.858243    5456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0910 11:11:37.863050    5456 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
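
Note: the two find/sed commands above rewrite any bridge or podman CNI config so its subnet and gateway land in 10.244.0.0/16, and drop IPv6 entries. The same substitution expressed with Go regexps, as a sketch of what the sed does rather than the code minikube runs:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `{ "subnet": "192.168.127.0/24", "gateway": "192.168.127.1" }`

        // Rewrite subnet and gateway the way the sed expressions above do.
        subnet := regexp.MustCompile(`"subnet": "[^"]*"`)
        gateway := regexp.MustCompile(`"gateway": "[^"]*"`)
        conf = subnet.ReplaceAllString(conf, `"subnet": "10.244.0.0/16"`)
        conf = gateway.ReplaceAllString(conf, `"gateway": "10.244.0.1"`)
        fmt.Println(conf)
    }
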
	I0910 11:11:37.863059    5456 start.go:495] detecting cgroup driver to use...
	I0910 11:11:37.863130    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 11:11:37.869477    5456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0910 11:11:37.872729    5456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 11:11:37.876116    5456 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 11:11:37.876142    5456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 11:11:37.879139    5456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 11:11:37.882202    5456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 11:11:37.885361    5456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 11:11:37.888759    5456 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 11:11:37.891932    5456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 11:11:37.894632    5456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 11:11:37.897716    5456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0910 11:11:37.901103    5456 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 11:11:37.903909    5456 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 11:11:37.906409    5456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 11:11:37.968197    5456 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0910 11:11:37.973819    5456 start.go:495] detecting cgroup driver to use...
	I0910 11:11:37.973871    5456 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 11:11:37.980775    5456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 11:11:37.986024    5456 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 11:11:37.992729    5456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 11:11:37.997500    5456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 11:11:38.002356    5456 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0910 11:11:38.033204    5456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 11:11:38.037985    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 11:11:38.043331    5456 ssh_runner.go:195] Run: which cri-dockerd
	I0910 11:11:38.044567    5456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 11:11:38.047437    5456 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0910 11:11:38.054273    5456 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 11:11:38.121029    5456 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 11:11:38.338760    5456 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 11:11:38.338862    5456 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0910 11:11:38.350097    5456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 11:11:38.427403    5456 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 11:11:39.535328    5456 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.107937875s)
	I0910 11:11:39.535385    5456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0910 11:11:39.539887    5456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 11:11:39.544184    5456 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 11:11:39.608181    5456 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 11:11:39.667055    5456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 11:11:39.736586    5456 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 11:11:39.742119    5456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 11:11:39.747019    5456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 11:11:39.807101    5456 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0910 11:11:39.843789    5456 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 11:11:39.843865    5456 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 11:11:39.846101    5456 start.go:563] Will wait 60s for crictl version
	I0910 11:11:39.846158    5456 ssh_runner.go:195] Run: which crictl
	I0910 11:11:39.847700    5456 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 11:11:39.862093    5456 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0910 11:11:39.862178    5456 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 11:11:39.882341    5456 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 11:11:39.901413    5456 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0910 11:11:39.901484    5456 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0910 11:11:39.902734    5456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 11:11:39.906093    5456 kubeadm.go:883] updating cluster {Name:stopped-upgrade-163000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50528 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0910 11:11:39.906141    5456 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0910 11:11:39.906179    5456 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 11:11:39.916950    5456 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0910 11:11:39.916959    5456 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0910 11:11:39.917013    5456 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0910 11:11:39.920661    5456 ssh_runner.go:195] Run: which lz4
	I0910 11:11:39.921978    5456 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 11:11:39.923331    5456 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 11:11:39.923340    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0910 11:11:40.805952    5456 docker.go:649] duration metric: took 884.028041ms to copy over tarball
	I0910 11:11:40.806011    5456 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
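
Note: the preload path first stats /preloaded.tar.lz4 on the guest, copies the ~360MB tarball over when it is absent, then unpacks it into /var with tar's lz4 filter, as shown above. A sketch wrapping those same two commands with os/exec (an illustrative wrapper, not minikube's ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        tarball := "/preloaded.tar.lz4"

        // Existence check, mirroring: stat -c "%s %y" /preloaded.tar.lz4
        if err := exec.Command("stat", "-c", "%s %y", tarball).Run(); err != nil {
            fmt.Println("tarball missing; would scp it over first:", err)
            return
        }
        // Extraction, mirroring the tar invocation logged above.
        out, err := exec.Command("sudo", "tar", "--xattrs",
            "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput()
        fmt.Println(string(out), err)
    }
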
	I0910 11:11:42.049150    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:11:42.049248    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:11:42.060901    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:11:42.060978    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:11:42.071771    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:11:42.071851    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:11:42.082424    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:11:42.082499    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:11:42.092741    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:11:42.092812    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:11:42.103443    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:11:42.103511    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:11:42.114087    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:11:42.114165    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:11:42.124990    5250 logs.go:276] 0 containers: []
	W0910 11:11:42.124999    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:11:42.125052    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:11:42.135459    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:11:42.135477    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:11:42.135483    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:11:42.150008    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:11:42.150018    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:11:42.167469    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:11:42.167484    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:11:42.181478    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:11:42.181489    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:11:42.197796    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:11:42.197807    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:11:42.209894    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:11:42.209910    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:11:41.965545    5456 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.159550792s)
	I0910 11:11:41.965559    5456 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 11:11:41.981558    5456 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0910 11:11:41.984883    5456 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0910 11:11:41.989972    5456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 11:11:42.056204    5456 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 11:11:43.603339    5456 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.547159292s)
	I0910 11:11:43.603444    5456 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 11:11:43.614093    5456 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0910 11:11:43.614103    5456 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
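
The mismatch driving this branch: the preload tarball was built when these images were published under k8s.gcr.io, while this minikube checks for the registry.k8s.io names, so the membership test fails and every image is reloaded from the host cache. A simplified sketch of that check (the function name is made up; the real logic lives around docker.go:691):

    package sketch

    // preloadSatisfied reports whether every required image name is already in
    // the runtime. With k8s.gcr.io tags on disk and registry.k8s.io tags
    // required, as in the log above, this returns false and LoadCachedImages
    // kicks in.
    func preloadSatisfied(got, want []string) bool {
        have := make(map[string]bool, len(got))
        for _, img := range got {
            have[img] = true
        }
        for _, img := range want {
            if !have[img] {
                return false
            }
        }
        return true
    }
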
	I0910 11:11:43.614108    5456 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0910 11:11:43.618418    5456 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 11:11:43.619874    5456 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0910 11:11:43.622378    5456 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0910 11:11:43.622423    5456 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 11:11:43.624016    5456 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0910 11:11:43.624287    5456 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0910 11:11:43.625228    5456 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0910 11:11:43.625495    5456 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0910 11:11:43.626361    5456 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0910 11:11:43.627279    5456 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0910 11:11:43.628166    5456 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0910 11:11:43.628560    5456 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0910 11:11:43.629678    5456 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0910 11:11:43.629685    5456 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0910 11:11:43.630846    5456 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0910 11:11:43.631925    5456 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0910 11:11:44.524394    5456 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0910 11:11:44.548278    5456 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0910 11:11:44.548322    5456 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0910 11:11:44.548404    5456 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0910 11:11:44.563184    5456 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0910 11:11:44.565086    5456 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0910 11:11:44.565205    5456 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0910 11:11:44.576315    5456 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0910 11:11:44.576321    5456 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0910 11:11:44.576340    5456 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0910 11:11:44.576358    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0910 11:11:44.576389    5456 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0910 11:11:44.592424    5456 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0910 11:11:44.593149    5456 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0910 11:11:44.593166    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0910 11:11:44.593589    5456 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0910 11:11:44.622615    5456 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0910 11:11:44.622624    5456 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0910 11:11:44.622637    5456 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0910 11:11:44.622688    5456 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	W0910 11:11:44.631604    5456 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0910 11:11:44.631735    5456 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0910 11:11:44.632320    5456 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0910 11:11:44.641793    5456 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0910 11:11:44.641814    5456 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0910 11:11:44.641868    5456 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0910 11:11:44.651647    5456 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0910 11:11:44.651766    5456 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0910 11:11:44.653200    5456 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0910 11:11:44.653215    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0910 11:11:44.695055    5456 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0910 11:11:44.695069    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0910 11:11:44.730974    5456 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0910 11:11:44.748662    5456 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0910 11:11:44.758442    5456 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0910 11:11:44.758461    5456 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0910 11:11:44.758518    5456 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0910 11:11:44.761309    5456 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0910 11:11:44.770277    5456 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0910 11:11:44.773634    5456 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0910 11:11:44.773664    5456 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0910 11:11:44.773678    5456 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0910 11:11:44.773710    5456 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0910 11:11:44.786866    5456 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0910 11:11:44.786885    5456 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0910 11:11:44.786915    5456 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0910 11:11:44.786942    5456 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0910 11:11:44.797328    5456 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W0910 11:11:44.836489    5456 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0910 11:11:44.836574    5456 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 11:11:44.848123    5456 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0910 11:11:44.848143    5456 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 11:11:44.848198    5456 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 11:11:44.862632    5456 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0910 11:11:44.862757    5456 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0910 11:11:44.864148    5456 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0910 11:11:44.864165    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0910 11:11:44.893641    5456 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0910 11:11:44.893653    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0910 11:11:45.135399    5456 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0910 11:11:45.135434    5456 cache_images.go:92] duration metric: took 1.521359625s to LoadCachedImages
	W0910 11:11:45.135473    5456 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
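
A note on the cache-load pattern visible above: for each image, minikube inspects the runtime's image ID, removes the image when the hash does not match, stats the tarball under /var/lib/minikube/images, copies it from the host cache if the stat fails, and finally streams it into the daemon with "sudo cat <tar> | docker load". A minimal Go sketch of that per-image sequence, assuming a hypothetical Runner abstraction in place of minikube's ssh_runner (this is not minikube's actual code):

    package sketch

    import "fmt"

    // Runner abstracts the remote command execution that appears as
    // ssh_runner.go in the log; Copy stands in for the scp step.
    type Runner interface {
        Run(cmd string) error
        Copy(localPath, remotePath string) error
    }

    // loadCachedImage mirrors the per-image flow above: stat the tarball on
    // the guest, copy it over if the stat fails, then pipe it into docker load.
    func loadCachedImage(r Runner, local, remote string) error {
        if err := r.Run(fmt.Sprintf("stat -c \"%%s %%y\" %s", remote)); err != nil {
            if err := r.Copy(local, remote); err != nil {
                return fmt.Errorf("transfer %s: %w", local, err)
            }
        }
        return r.Run(fmt.Sprintf("/bin/bash -c \"sudo cat %s | docker load\"", remote))
    }
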
	I0910 11:11:45.135479    5456 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0910 11:11:45.135536    5456 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-163000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
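
The [Service] fragment above becomes the 380-byte systemd drop-in scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; the empty ExecStart= line clears the packaged unit's command before substituting the versioned kubelet binary with node-specific flags. A sketch of rendering such a drop-in with Go's text/template (the template fields are illustrative, not minikube's):

    package sketch

    import (
        "bytes"
        "text/template"
    )

    // kubeletUnit is a reduced form of the drop-in shown in the log above.
    const kubeletUnit = "[Unit]\n" +
        "Wants=docker.socket\n\n" +
        "[Service]\n" +
        "ExecStart=\n" +
        "ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet " +
        "--hostname-override={{.Node}} --node-ip={{.IP}}\n\n" +
        "[Install]\n"

    // renderKubeletDropIn fills in the node-specific flags, as kubeadm.go:946
    // does before the drop-in is shipped to the guest.
    func renderKubeletDropIn(version, node, ip string) (string, error) {
        t, err := template.New("kubelet").Parse(kubeletUnit)
        if err != nil {
            return "", err
        }
        var buf bytes.Buffer
        err = t.Execute(&buf, struct{ Version, Node, IP string }{version, node, ip})
        return buf.String(), err
    }
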
	I0910 11:11:45.135593    5456 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0910 11:11:45.155477    5456 cni.go:84] Creating CNI manager for ""
	I0910 11:11:45.155488    5456 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:11:45.155492    5456 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 11:11:45.155500    5456 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-163000 NodeName:stopped-upgrade-163000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 11:11:45.155584    5456 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-163000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
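
The generated kubeadm config above is a single multi-document YAML: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by "---", written to /var/tmp/minikube/kubeadm.yaml.new and later diffed against the live copy. A sketch of walking those documents with the third-party gopkg.in/yaml.v3 decoder (assumed available; not part of minikube):

    package sketch

    import (
        "bytes"
        "io"

        "gopkg.in/yaml.v3"
    )

    // kubeadmDocKinds decodes a multi-document config like the one above and
    // returns each document's kind: InitConfiguration, ClusterConfiguration,
    // KubeletConfiguration, KubeProxyConfiguration.
    func kubeadmDocKinds(config []byte) ([]string, error) {
        dec := yaml.NewDecoder(bytes.NewReader(config))
        var kinds []string
        for {
            var doc struct {
                Kind string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                return kinds, nil
            } else if err != nil {
                return nil, err
            }
            kinds = append(kinds, doc.Kind)
        }
    }
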
	I0910 11:11:45.155637    5456 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0910 11:11:45.158415    5456 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 11:11:45.158447    5456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 11:11:45.161482    5456 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0910 11:11:45.166719    5456 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 11:11:45.172071    5456 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0910 11:11:45.177711    5456 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0910 11:11:45.179049    5456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
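
The hosts-file rewrite above is a strip-then-append: grep -v drops any stale control-plane.minikube.internal line, echo appends the fresh mapping, and the temp file is copied back over /etc/hosts. The same transform expressed in Go, stdlib only (path and hostname taken from this run; the helper name is made up):

    package sketch

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHostsEntry rewrites hostsPath so exactly one line maps name to ip,
    // matching the log's "{ grep -v ...; echo ...; } > /tmp/h.$$" pipeline.
    func pinHostsEntry(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        var kept []string
        for _, line := range lines {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }
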
	I0910 11:11:45.182495    5456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 11:11:45.242826    5456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 11:11:45.252808    5456 certs.go:68] Setting up /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000 for IP: 10.0.2.15
	I0910 11:11:45.252820    5456 certs.go:194] generating shared ca certs ...
	I0910 11:11:45.252829    5456 certs.go:226] acquiring lock for ca certs: {Name:mk5b237e8da18ff05d2622f0be5a14dbe0d9b4f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:11:45.253001    5456 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.key
	I0910 11:11:45.253051    5456 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/proxy-client-ca.key
	I0910 11:11:45.253057    5456 certs.go:256] generating profile certs ...
	I0910 11:11:45.253131    5456 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/client.key
	I0910 11:11:45.253151    5456 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/apiserver.key.17ddb0fc
	I0910 11:11:45.253162    5456 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/apiserver.crt.17ddb0fc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0910 11:11:45.296715    5456 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/apiserver.crt.17ddb0fc ...
	I0910 11:11:45.296726    5456 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/apiserver.crt.17ddb0fc: {Name:mk2707e74b1ac3f5acd434d600070bb62d00ad14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:11:45.297033    5456 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/apiserver.key.17ddb0fc ...
	I0910 11:11:45.297038    5456 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/apiserver.key.17ddb0fc: {Name:mk1364e97b609150ccb4151ef7919a71c67a2736 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:11:45.297164    5456 certs.go:381] copying /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/apiserver.crt.17ddb0fc -> /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/apiserver.crt
	I0910 11:11:45.297298    5456 certs.go:385] copying /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/apiserver.key.17ddb0fc -> /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/apiserver.key
	I0910 11:11:45.297455    5456 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/proxy-client.key
	I0910 11:11:45.297589    5456 certs.go:484] found cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/1795.pem (1338 bytes)
	W0910 11:11:45.297622    5456 certs.go:480] ignoring /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/1795_empty.pem, impossibly tiny 0 bytes
	I0910 11:11:45.297627    5456 certs.go:484] found cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca-key.pem (1675 bytes)
	I0910 11:11:45.297650    5456 certs.go:484] found cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem (1078 bytes)
	I0910 11:11:45.297672    5456 certs.go:484] found cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem (1123 bytes)
	I0910 11:11:45.297695    5456 certs.go:484] found cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/key.pem (1675 bytes)
	I0910 11:11:45.297734    5456 certs.go:484] found cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/files/etc/ssl/certs/17952.pem (1708 bytes)
	I0910 11:11:45.298062    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 11:11:45.305107    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 11:11:45.312227    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 11:11:45.319359    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 11:11:45.326205    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0910 11:11:45.333296    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 11:11:45.340588    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 11:11:45.347501    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 11:11:45.354259    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 11:11:45.361185    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/1795.pem --> /usr/share/ca-certificates/1795.pem (1338 bytes)
	I0910 11:11:45.368331    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/files/etc/ssl/certs/17952.pem --> /usr/share/ca-certificates/17952.pem (1708 bytes)
	I0910 11:11:45.374893    5456 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 11:11:45.379553    5456 ssh_runner.go:195] Run: openssl version
	I0910 11:11:45.381373    5456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17952.pem && ln -fs /usr/share/ca-certificates/17952.pem /etc/ssl/certs/17952.pem"
	I0910 11:11:45.384785    5456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17952.pem
	I0910 11:11:45.386239    5456 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:44 /usr/share/ca-certificates/17952.pem
	I0910 11:11:45.386258    5456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17952.pem
	I0910 11:11:45.388001    5456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17952.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 11:11:45.390705    5456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 11:11:45.393612    5456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 11:11:45.395105    5456 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 11:11:45.395129    5456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 11:11:45.396784    5456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 11:11:45.399753    5456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1795.pem && ln -fs /usr/share/ca-certificates/1795.pem /etc/ssl/certs/1795.pem"
	I0910 11:11:45.402549    5456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1795.pem
	I0910 11:11:45.403847    5456 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:44 /usr/share/ca-certificates/1795.pem
	I0910 11:11:45.403863    5456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1795.pem
	I0910 11:11:45.405389    5456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1795.pem /etc/ssl/certs/51391683.0"
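
The "openssl x509 -hash -noout" calls above compute OpenSSL's subject-name hash, which is the required symlink name under /etc/ssl/certs (51391683.0, b5213941.0, and 3ec20f2e.0 in this run); the "test -L || ln -fs" guards keep the links idempotent. A sketch of the same hash-and-link step from Go, shelling out to openssl (the helper name is illustrative):

    package sketch

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // installCACert hashes pemPath with openssl and links it into
    // /etc/ssl/certs under <hash>.0, the layout the log's commands maintain.
    func installCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hash %s: %w", pemPath, err)
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
        // ln -f replaces a stale link, matching the idempotent ln -fs above.
        return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
    }
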
	I0910 11:11:45.408598    5456 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 11:11:45.410077    5456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 11:11:45.412151    5456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 11:11:45.413979    5456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 11:11:45.415900    5456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 11:11:45.417663    5456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 11:11:45.419413    5456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
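
The six "-checkend 86400" probes above are a cheap expiry test: openssl exits non-zero when the certificate expires within the given number of seconds, which is how the restart path decides whether control-plane certs must be regenerated. An equivalent check in pure Go stdlib, assuming a 24h window to match the 86400 seconds:

    package sketch

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires inside
    // window, mirroring `openssl x509 -noout -in <path> -checkend 86400`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }
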
	I0910 11:11:45.421231    5456 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-163000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50528 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0910 11:11:45.421298    5456 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 11:11:45.431780    5456 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 11:11:45.434875    5456 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 11:11:45.434881    5456 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 11:11:45.434903    5456 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 11:11:45.438477    5456 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 11:11:45.438783    5456 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-163000" does not appear in /Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:11:45.438876    5456 kubeconfig.go:62] /Users/jenkins/minikube-integration/19598-1276/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-163000" cluster setting kubeconfig missing "stopped-upgrade-163000" context setting]
	I0910 11:11:45.439077    5456 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/kubeconfig: {Name:mk1f6cc8b92900503b90f69186dd5a0cadd3a95f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:11:45.439555    5456 kapi.go:59] client config for stopped-upgrade-163000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/client.key", CAFile:"/Users/jenkins/minikube-integration/19598-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10692e200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 11:11:45.439906    5456 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 11:11:45.442681    5456 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-163000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
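
Config-drift detection above is just diff -u with the exit status doing the work: status 0 means no reconfiguration, status 1 means drift (here the CRI socket gaining its unix:// scheme and the cgroup driver flipping from systemd to cgroupfs). A sketch of interpreting those exit codes from Go (the function name is made up):

    package sketch

    import (
        "errors"
        "os/exec"
    )

    // kubeadmConfigDrifted runs the same `sudo diff -u` the log shows and
    // treats diff's exit status 1 (files differ) as drift; status 0 means the
    // cluster can be restarted without reconfiguring.
    func kubeadmConfigDrifted(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 1 {
            return true, string(out), nil // drift; out holds the unified diff
        }
        return false, "", err
    }
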
	I0910 11:11:45.442686    5456 kubeadm.go:1160] stopping kube-system containers ...
	I0910 11:11:45.442727    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 11:11:45.453186    5456 docker.go:483] Stopping containers: [29be8057a1dd 469710f91457 0871f0cf5a37 8d2c0af3a670 8db99da6a98d 4fd21312b6dc 6555df8fa22d 938546a9d4bc]
	I0910 11:11:45.453250    5456 ssh_runner.go:195] Run: docker stop 29be8057a1dd 469710f91457 0871f0cf5a37 8d2c0af3a670 8db99da6a98d 4fd21312b6dc 6555df8fa22d 938546a9d4bc
	I0910 11:11:45.468465    5456 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 11:11:45.474160    5456 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 11:11:45.477246    5456 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 11:11:45.477252    5456 kubeadm.go:157] found existing configuration files:
	
	I0910 11:11:45.477278    5456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/admin.conf
	I0910 11:11:45.480414    5456 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 11:11:45.480437    5456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 11:11:45.482985    5456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/kubelet.conf
	I0910 11:11:45.485441    5456 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 11:11:45.485467    5456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 11:11:45.488592    5456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/controller-manager.conf
	I0910 11:11:45.491414    5456 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 11:11:45.491435    5456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 11:11:45.494114    5456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/scheduler.conf
	I0910 11:11:45.497149    5456 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 11:11:45.497173    5456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 11:11:45.500065    5456 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 11:11:45.502666    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 11:11:45.524129    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 11:11:46.237683    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 11:11:46.351537    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 11:11:46.381026    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 11:11:46.403256    5456 api_server.go:52] waiting for apiserver process to appear ...
	I0910 11:11:46.403335    5456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 11:11:42.252275    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:11:42.252286    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:11:42.256386    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:11:42.256395    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:11:42.291467    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:11:42.291478    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:11:42.306187    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:11:42.306198    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:11:42.320935    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:11:42.320946    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:11:42.336217    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:11:42.336228    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:11:42.353991    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:11:42.354002    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:11:42.365843    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:11:42.365854    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:11:42.377171    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:11:42.377182    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:11:42.389752    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:11:42.389764    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:11:42.401357    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:11:42.401371    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:11:44.925838    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:11:46.905441    5456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 11:11:47.405364    5456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 11:11:47.409313    5456 api_server.go:72] duration metric: took 1.006086959s to wait for apiserver process to appear ...
	I0910 11:11:47.409322    5456 api_server.go:88] waiting for apiserver healthz status ...
	I0910 11:11:47.409335    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:11:49.927949    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0910 11:11:49.928043    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:11:49.952663    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:11:49.952738    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:11:49.964326    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:11:49.964403    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:11:49.975370    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:11:49.975441    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:11:49.987893    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:11:49.987980    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:11:49.999631    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:11:49.999707    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:11:50.010718    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:11:50.010789    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:11:50.022193    5250 logs.go:276] 0 containers: []
	W0910 11:11:50.022204    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:11:50.022267    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:11:50.033856    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:11:50.033875    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:11:50.033881    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:11:50.049335    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:11:50.049346    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:11:50.088323    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:11:50.088335    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:11:50.102721    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:11:50.102732    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:11:50.120616    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:11:50.120627    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:11:50.132616    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:11:50.132628    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:11:50.157911    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:11:50.157924    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:11:50.162584    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:11:50.162591    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:11:50.177674    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:11:50.177685    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:11:50.194873    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:11:50.194884    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:11:50.206529    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:11:50.206543    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:11:50.249008    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:11:50.249025    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:11:50.263772    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:11:50.263783    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:11:50.275470    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:11:50.275482    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:11:50.287763    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:11:50.287776    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:11:50.302041    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:11:50.302052    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:11:50.314486    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:11:50.314498    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:11:52.411433    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:11:52.411536    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:11:52.827047    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:11:57.412238    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:11:57.412319    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:11:57.827432    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:11:57.827682    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:11:57.863836    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:11:57.863923    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:11:57.885853    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:11:57.885935    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:11:57.903243    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:11:57.903317    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:11:57.919009    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:11:57.919077    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:11:57.930221    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:11:57.930291    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:11:57.941008    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:11:57.941078    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:11:57.951315    5250 logs.go:276] 0 containers: []
	W0910 11:11:57.951327    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:11:57.951381    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:11:57.962096    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:11:57.962115    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:11:57.962122    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:11:57.972986    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:11:57.972997    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:11:57.995621    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:11:57.995630    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:11:58.014034    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:11:58.014048    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:11:58.027775    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:11:58.027786    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:11:58.043326    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:11:58.043338    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:11:58.060662    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:11:58.060675    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:11:58.071749    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:11:58.071759    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:11:58.085392    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:11:58.085402    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:11:58.106674    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:11:58.106685    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:11:58.121612    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:11:58.121624    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:11:58.133276    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:11:58.133290    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:11:58.148117    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:11:58.148128    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:11:58.161983    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:11:58.161997    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:11:58.204592    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:11:58.204603    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:11:58.208865    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:11:58.208872    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:11:58.243291    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:11:58.243301    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:12:00.756850    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:02.413214    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:02.413318    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:05.758921    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:05.759023    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:12:05.770802    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:12:05.770870    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:12:05.781493    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:12:05.781564    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:12:05.792609    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:12:05.792686    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:12:05.805467    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:12:05.805539    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:12:05.815749    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:12:05.815821    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:12:05.826539    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:12:05.826609    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:12:05.837720    5250 logs.go:276] 0 containers: []
	W0910 11:12:05.837731    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:12:05.837790    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:12:05.848002    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:12:05.848019    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:12:05.848025    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:12:05.860662    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:12:05.860674    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:12:05.873413    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:12:05.873423    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:12:05.917349    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:12:05.917364    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:12:05.930306    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:12:05.930317    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:12:05.942509    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:12:05.942529    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:12:05.953868    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:12:05.953877    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:12:05.976470    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:12:05.976480    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:12:05.994675    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:12:05.994689    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:12:06.009281    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:12:06.009294    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:12:06.023552    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:12:06.023563    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:12:06.038440    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:12:06.038457    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:12:06.059842    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:12:06.059858    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:12:06.071682    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:12:06.071692    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:12:06.076014    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:12:06.076021    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:12:06.113295    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:12:06.113308    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:12:06.128275    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:12:06.128286    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:12:07.414176    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:07.414206    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:08.642384    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:12.415197    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:12.415278    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:13.642642    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:13.642863    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:12:13.663990    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:12:13.664098    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:12:13.678811    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:12:13.678889    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:12:13.690732    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:12:13.690808    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:12:13.701309    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:12:13.701379    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:12:13.711452    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:12:13.711525    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:12:13.722069    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:12:13.722135    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:12:13.732396    5250 logs.go:276] 0 containers: []
	W0910 11:12:13.732408    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:12:13.732471    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:12:13.745076    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:12:13.745093    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:12:13.745099    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:12:13.756697    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:12:13.756710    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:12:13.768042    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:12:13.768054    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:12:13.779245    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:12:13.779255    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:12:13.821774    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:12:13.821783    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:12:13.849517    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:12:13.849531    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:12:13.864580    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:12:13.864590    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:12:13.883241    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:12:13.883255    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:12:13.925295    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:12:13.925308    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:12:13.940183    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:12:13.940194    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:12:13.955386    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:12:13.955400    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:12:13.966971    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:12:13.966984    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:12:13.979959    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:12:13.979970    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:12:14.003155    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:12:14.003164    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:12:14.007724    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:12:14.007731    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:12:14.020711    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:12:14.020724    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:12:14.034685    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:12:14.034696    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:12:16.548194    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:17.416734    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:17.416777    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:21.550730    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:21.550904    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:12:21.569129    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:12:21.569226    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:12:21.582862    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:12:21.582937    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:12:21.594496    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:12:21.594563    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:12:21.607685    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:12:21.607755    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:12:21.617997    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:12:21.618070    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:12:21.628618    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:12:21.628684    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:12:21.638882    5250 logs.go:276] 0 containers: []
	W0910 11:12:21.638897    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:12:21.638954    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:12:21.649904    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:12:21.649923    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:12:21.649929    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:12:21.662443    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:12:21.662455    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:12:21.674020    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:12:21.674031    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:12:21.695865    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:12:21.695879    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:12:21.713961    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:12:21.713972    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:12:21.725978    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:12:21.725991    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:12:21.741279    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:12:21.741290    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:12:21.754928    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:12:21.754939    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:12:21.772462    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:12:21.772472    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:12:21.784582    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:12:21.784595    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:12:21.806894    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:12:21.806901    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:12:21.848389    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:12:21.848403    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:12:21.852781    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:12:21.852790    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:12:21.887660    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:12:21.887674    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:12:21.902631    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:12:21.902641    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:12:21.922176    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:12:21.922190    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:12:21.937101    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:12:21.937111    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:12:22.418892    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:22.418915    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:24.451089    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:27.420969    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:27.421046    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:29.453275    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:29.453475    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:12:29.481464    5250 logs.go:276] 2 containers: [20b5cbeb8dff 296a4d729754]
	I0910 11:12:29.481580    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:12:29.499637    5250 logs.go:276] 2 containers: [f646b2d3be9d a1e228399b97]
	I0910 11:12:29.499725    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:12:29.513031    5250 logs.go:276] 1 containers: [69e76299b88b]
	I0910 11:12:29.513101    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:12:29.524858    5250 logs.go:276] 2 containers: [f0a41cce875a 13966ceb0569]
	I0910 11:12:29.524931    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:12:29.535549    5250 logs.go:276] 1 containers: [6d87ae41ca53]
	I0910 11:12:29.535619    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:12:29.546188    5250 logs.go:276] 2 containers: [8eff784862ab 31602e89a910]
	I0910 11:12:29.546265    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:12:29.556425    5250 logs.go:276] 0 containers: []
	W0910 11:12:29.556451    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:12:29.556509    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:12:29.567640    5250 logs.go:276] 2 containers: [0083eb8d401d a53439bdadb5]
	I0910 11:12:29.567664    5250 logs.go:123] Gathering logs for kube-scheduler [f0a41cce875a] ...
	I0910 11:12:29.567670    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0a41cce875a"
	I0910 11:12:29.581717    5250 logs.go:123] Gathering logs for storage-provisioner [0083eb8d401d] ...
	I0910 11:12:29.581728    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0083eb8d401d"
	I0910 11:12:29.593329    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:12:29.593339    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:12:29.606890    5250 logs.go:123] Gathering logs for kube-apiserver [20b5cbeb8dff] ...
	I0910 11:12:29.606904    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20b5cbeb8dff"
	I0910 11:12:29.620982    5250 logs.go:123] Gathering logs for kube-apiserver [296a4d729754] ...
	I0910 11:12:29.620993    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296a4d729754"
	I0910 11:12:29.633648    5250 logs.go:123] Gathering logs for etcd [a1e228399b97] ...
	I0910 11:12:29.633660    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e228399b97"
	I0910 11:12:29.671471    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:12:29.671481    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:12:29.695483    5250 logs.go:123] Gathering logs for kube-proxy [6d87ae41ca53] ...
	I0910 11:12:29.695489    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d87ae41ca53"
	I0910 11:12:29.707337    5250 logs.go:123] Gathering logs for kube-controller-manager [8eff784862ab] ...
	I0910 11:12:29.707348    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff784862ab"
	I0910 11:12:29.725877    5250 logs.go:123] Gathering logs for kube-controller-manager [31602e89a910] ...
	I0910 11:12:29.725890    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31602e89a910"
	I0910 11:12:29.738394    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:12:29.738408    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:12:29.779103    5250 logs.go:123] Gathering logs for coredns [69e76299b88b] ...
	I0910 11:12:29.779113    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e76299b88b"
	I0910 11:12:29.790280    5250 logs.go:123] Gathering logs for kube-scheduler [13966ceb0569] ...
	I0910 11:12:29.790293    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13966ceb0569"
	I0910 11:12:29.804984    5250 logs.go:123] Gathering logs for storage-provisioner [a53439bdadb5] ...
	I0910 11:12:29.804995    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a53439bdadb5"
	I0910 11:12:29.820820    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:12:29.820832    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:12:29.825038    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:12:29.825044    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:12:29.859507    5250 logs.go:123] Gathering logs for etcd [f646b2d3be9d] ...
	I0910 11:12:29.859518    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f646b2d3be9d"
	I0910 11:12:32.423477    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:32.423502    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:32.374177    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:37.374349    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:37.374459    5250 kubeadm.go:597] duration metric: took 4m4.290381791s to restartPrimaryControlPlane
	W0910 11:12:37.374510    5250 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0910 11:12:37.374529    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0910 11:12:38.374760    5250 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.000242333s)
	I0910 11:12:38.374829    5250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 11:12:38.379969    5250 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 11:12:38.382850    5250 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 11:12:38.386003    5250 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 11:12:38.386012    5250 kubeadm.go:157] found existing configuration files:
	
	I0910 11:12:38.386037    5250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50307 /etc/kubernetes/admin.conf
	I0910 11:12:38.388771    5250 kubeadm.go:163] "https://control-plane.minikube.internal:50307" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50307 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 11:12:38.388798    5250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 11:12:38.391292    5250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50307 /etc/kubernetes/kubelet.conf
	I0910 11:12:38.394391    5250 kubeadm.go:163] "https://control-plane.minikube.internal:50307" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50307 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 11:12:38.394417    5250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 11:12:38.397577    5250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50307 /etc/kubernetes/controller-manager.conf
	I0910 11:12:38.400036    5250 kubeadm.go:163] "https://control-plane.minikube.internal:50307" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50307 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 11:12:38.400056    5250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 11:12:38.402992    5250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50307 /etc/kubernetes/scheduler.conf
	I0910 11:12:38.405961    5250 kubeadm.go:163] "https://control-plane.minikube.internal:50307" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50307 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 11:12:38.405984    5250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
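The four grep/rm pairs above follow one pattern: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the endpoint is absent (here the files do not exist at all, so grep exits with status 2 and the rm is a no-op). A compact sketch of the same cleanup, using only the endpoint and file names already in the log:

    endpoint="https://control-plane.minikube.internal:50307"
    for name in admin kubelet controller-manager scheduler; do
      f="/etc/kubernetes/${name}.conf"
      sudo grep -q "$endpoint" "$f" || sudo rm -f "$f"
    done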
	I0910 11:12:38.408569    5250 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 11:12:38.426481    5250 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0910 11:12:38.426571    5250 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 11:12:38.474407    5250 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 11:12:38.474457    5250 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 11:12:38.474504    5250 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0910 11:12:38.524920    5250 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 11:12:38.529981    5250 out.go:235]   - Generating certificates and keys ...
	I0910 11:12:38.530016    5250 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 11:12:38.530051    5250 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 11:12:38.530097    5250 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 11:12:38.530129    5250 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 11:12:38.530164    5250 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 11:12:38.530189    5250 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 11:12:38.530220    5250 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 11:12:38.530265    5250 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 11:12:38.530298    5250 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 11:12:38.530339    5250 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 11:12:38.530365    5250 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 11:12:38.530391    5250 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 11:12:38.685735    5250 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 11:12:38.933938    5250 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 11:12:39.016659    5250 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 11:12:39.184718    5250 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 11:12:39.214325    5250 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 11:12:39.215544    5250 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 11:12:39.215570    5250 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 11:12:39.300912    5250 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 11:12:37.424829    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:37.424851    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:39.308384    5250 out.go:235]   - Booting up control plane ...
	I0910 11:12:39.308446    5250 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 11:12:39.308484    5250 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 11:12:39.308521    5250 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 11:12:39.308572    5250 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 11:12:39.308662    5250 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0910 11:12:43.806224    5250 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502755 seconds
	I0910 11:12:43.806374    5250 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 11:12:43.810927    5250 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 11:12:44.324816    5250 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 11:12:44.324947    5250 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-978000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 11:12:44.830426    5250 kubeadm.go:310] [bootstrap-token] Using token: 3orerm.xyjpdf2qf6njoeux
	I0910 11:12:44.836828    5250 out.go:235]   - Configuring RBAC rules ...
	I0910 11:12:44.836900    5250 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 11:12:44.836950    5250 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 11:12:44.841426    5250 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 11:12:44.842290    5250 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 11:12:44.843158    5250 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 11:12:44.843904    5250 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 11:12:44.847148    5250 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 11:12:45.022591    5250 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0910 11:12:45.234977    5250 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0910 11:12:45.235424    5250 kubeadm.go:310] 
	I0910 11:12:45.235458    5250 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0910 11:12:45.235463    5250 kubeadm.go:310] 
	I0910 11:12:45.235561    5250 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0910 11:12:45.235594    5250 kubeadm.go:310] 
	I0910 11:12:45.235622    5250 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0910 11:12:45.235659    5250 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 11:12:45.235694    5250 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 11:12:45.235697    5250 kubeadm.go:310] 
	I0910 11:12:45.235729    5250 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0910 11:12:45.235734    5250 kubeadm.go:310] 
	I0910 11:12:45.235758    5250 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 11:12:45.235763    5250 kubeadm.go:310] 
	I0910 11:12:45.235789    5250 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0910 11:12:45.235834    5250 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 11:12:45.235876    5250 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 11:12:45.235883    5250 kubeadm.go:310] 
	I0910 11:12:45.235923    5250 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 11:12:45.235986    5250 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0910 11:12:45.235990    5250 kubeadm.go:310] 
	I0910 11:12:45.236036    5250 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3orerm.xyjpdf2qf6njoeux \
	I0910 11:12:45.236256    5250 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fe03b769f4337d7c0adc05ef52c00fad5eef028fab37b5c6cf35794f6ca4bdd0 \
	I0910 11:12:45.236266    5250 kubeadm.go:310] 	--control-plane 
	I0910 11:12:45.236268    5250 kubeadm.go:310] 
	I0910 11:12:45.236303    5250 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0910 11:12:45.236305    5250 kubeadm.go:310] 
	I0910 11:12:45.236340    5250 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3orerm.xyjpdf2qf6njoeux \
	I0910 11:12:45.236398    5250 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fe03b769f4337d7c0adc05ef52c00fad5eef028fab37b5c6cf35794f6ca4bdd0 
	I0910 11:12:45.236446    5250 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
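The --discovery-token-ca-cert-hash in the join commands above is the sha256 of the cluster CA's public key. The usual recipe for recomputing it, shown against the certificateDir this run reports (/var/lib/minikube/certs); this is the documented kubeadm convention, not output from this log:

    # sha256 over the DER-encoded public key of the cluster CA
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'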
	I0910 11:12:45.236478    5250 cni.go:84] Creating CNI manager for ""
	I0910 11:12:45.236488    5250 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:12:45.240678    5250 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 11:12:45.247605    5250 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 11:12:45.250578    5250 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
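The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not echoed in the log; what minikube installs for the bridge CNI is typically a conflist along these lines (the contents below are an assumption based on the standard bridge plugin; only the destination path comes from the log):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF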
	I0910 11:12:45.255564    5250 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 11:12:45.255617    5250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 11:12:45.255658    5250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-978000 minikube.k8s.io/updated_at=2024_09_10T11_12_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=running-upgrade-978000 minikube.k8s.io/primary=true
	I0910 11:12:45.298345    5250 kubeadm.go:1113] duration metric: took 42.763583ms to wait for elevateKubeSystemPrivileges
	I0910 11:12:45.298367    5250 ops.go:34] apiserver oom_adj: -16
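The elevateKubeSystemPrivileges step is the kubectl call a few lines up (a cluster-admin binding for kube-system's default service account, alongside the node-label command), and the oom_adj line simply reads the apiserver's OOM score adjustment. Hand-run equivalents, using only paths already present in the log:

    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default
    cat /proc/$(pgrep kube-apiserver)/oom_adj   # -16 in this run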
	I0910 11:12:45.298374    5250 kubeadm.go:394] duration metric: took 4m12.228942208s to StartCluster
	I0910 11:12:45.298385    5250 settings.go:142] acquiring lock: {Name:mkc4479acb7c6185024679cd35acf0055f682c3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:12:45.298478    5250 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:12:45.298864    5250 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/kubeconfig: {Name:mk1f6cc8b92900503b90f69186dd5a0cadd3a95f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:12:45.299071    5250 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:12:45.299102    5250 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 11:12:45.299143    5250 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-978000"
	I0910 11:12:45.299156    5250 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-978000"
	W0910 11:12:45.299160    5250 addons.go:243] addon storage-provisioner should already be in state true
	I0910 11:12:45.299171    5250 host.go:66] Checking if "running-upgrade-978000" exists ...
	I0910 11:12:45.299168    5250 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-978000"
	I0910 11:12:45.299239    5250 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-978000"
	I0910 11:12:45.299302    5250 config.go:182] Loaded profile config "running-upgrade-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0910 11:12:45.300648    5250 out.go:177] * Verifying Kubernetes components...
	I0910 11:12:45.301397    5250 kapi.go:59] client config for running-upgrade-978000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/running-upgrade-978000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/running-upgrade-978000/client.key", CAFile:"/Users/jenkins/minikube-integration/19598-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101ff2200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
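The rest.Config above authenticates with the profile's client certificate rather than a bearer token. A hedged way to sanity-check that certificate chain from the host (the openssl usage is an assumption; the paths are copied from the config line):

    profile=/Users/jenkins/minikube-integration/19598-1276/.minikube
    openssl verify -CAfile "$profile/ca.crt" \
      "$profile/profiles/running-upgrade-978000/client.crt"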
	I0910 11:12:45.306991    5250 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-978000"
	W0910 11:12:45.306996    5250 addons.go:243] addon default-storageclass should already be in state true
	I0910 11:12:45.307004    5250 host.go:66] Checking if "running-upgrade-978000" exists ...
	I0910 11:12:45.307518    5250 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 11:12:45.307523    5250 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 11:12:45.307529    5250 sshutil.go:53] new ssh client: &{IP:localhost Port:50275 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/running-upgrade-978000/id_rsa Username:docker}
	I0910 11:12:45.309549    5250 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 11:12:42.426919    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:42.426963    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:45.312662    5250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 11:12:45.316697    5250 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 11:12:45.316703    5250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 11:12:45.316710    5250 sshutil.go:53] new ssh client: &{IP:localhost Port:50275 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/running-upgrade-978000/id_rsa Username:docker}
	I0910 11:12:45.404248    5250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 11:12:45.409067    5250 api_server.go:52] waiting for apiserver process to appear ...
	I0910 11:12:45.409112    5250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 11:12:45.412969    5250 api_server.go:72] duration metric: took 113.889584ms to wait for apiserver process to appear ...
	I0910 11:12:45.412976    5250 api_server.go:88] waiting for apiserver healthz status ...
	I0910 11:12:45.412982    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
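The process wait above reduces to a single pgrep: with -f it matches against the full command line, -x requires the whole line to match the pattern, and -n returns only the newest match (flags exactly as shown in the log):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'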
	I0910 11:12:45.448854    5250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 11:12:45.459223    5250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
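Both addons land via plain kubectl apply against the freshly generated kubeconfig. A hedged follow-up check that the objects exist (the resource names below are assumptions based on the standard minikube addons; the binary and kubeconfig paths are from the log):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.24.1/kubectl get storageclass -o name
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.24.1/kubectl -n kube-system get pods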
	I0910 11:12:45.781143    5250 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0910 11:12:45.781156    5250 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0910 11:12:47.429092    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:47.429256    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:12:47.443860    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:12:47.443927    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:12:47.455656    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:12:47.455732    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:12:47.466342    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:12:47.466417    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:12:47.476707    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:12:47.476781    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:12:47.487078    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:12:47.487151    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:12:47.499286    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:12:47.499362    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:12:47.509912    5456 logs.go:276] 0 containers: []
	W0910 11:12:47.509929    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:12:47.509992    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:12:47.520483    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:12:47.520501    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:12:47.520507    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:12:47.531847    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:12:47.531859    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:12:47.570276    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:12:47.570287    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:12:47.610246    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:12:47.610257    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:12:47.622311    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:12:47.622323    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:12:47.633738    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:12:47.633749    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:12:47.652180    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:12:47.652193    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:12:47.663556    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:12:47.663567    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:12:47.667874    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:12:47.667881    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:12:47.742827    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:12:47.742842    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:12:47.759440    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:12:47.759451    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:12:47.773008    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:12:47.773019    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:12:47.791726    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:12:47.791743    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:12:47.804965    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:12:47.804976    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:12:47.830162    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:12:47.830175    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:12:47.842373    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:12:47.842387    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:12:47.857805    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:12:47.857816    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:12:50.370958    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:50.414996    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:50.415030    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:55.371481    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:55.371589    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:12:55.382543    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:12:55.382625    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:12:55.392838    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:12:55.392902    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:12:55.403304    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:12:55.403373    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:12:55.413938    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:12:55.414027    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:12:55.425670    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:12:55.425740    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:12:55.436384    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:12:55.436458    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:12:55.446677    5456 logs.go:276] 0 containers: []
	W0910 11:12:55.446689    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:12:55.446747    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:12:55.457558    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:12:55.457575    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:12:55.457582    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:12:55.496158    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:12:55.496172    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:12:55.508689    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:12:55.508703    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:12:55.513216    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:12:55.513224    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:12:55.526892    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:12:55.526905    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:12:55.539119    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:12:55.539134    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:12:55.551945    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:12:55.551958    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:12:55.589720    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:12:55.589731    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:12:55.604784    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:12:55.604795    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:12:55.629476    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:12:55.629490    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:12:55.641454    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:12:55.641466    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:12:55.679842    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:12:55.679854    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:12:55.694117    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:12:55.694127    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:12:55.708532    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:12:55.708544    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:12:55.719981    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:12:55.720001    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:12:55.737711    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:12:55.737720    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:12:55.752800    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:12:55.752815    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:12:55.415363    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:55.415378    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:58.266307    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:00.415558    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:00.415596    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:03.267602    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:03.267888    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:13:03.297771    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:13:03.297898    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:13:03.315387    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:13:03.315486    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:13:03.330376    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:13:03.330452    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:13:03.341718    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:13:03.341784    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:13:03.351645    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:13:03.351722    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:13:03.364447    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:13:03.364532    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:13:03.374724    5456 logs.go:276] 0 containers: []
	W0910 11:13:03.374735    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:13:03.374796    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:13:03.384885    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
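
Between probe failures, each process re-enumerates the control-plane containers: one `docker ps -a` per component, filtered on the k8s_<component> name prefix that the Docker CRI integration gives Kubernetes-managed containers, printing only IDs. A short Go sketch of that enumeration (the component list is the one visible above; the function name is made up):

	// discover runs `docker ps -a --filter=name=k8s_<name> --format={{.ID}}`
	// for one component and collects the resulting container IDs.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func discover(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "storage-provisioner"} {
			ids, err := discover(c)
			if err != nil {
				fmt.Println(err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%d containers: %v\n", len(ids), ids)
		}
	}

The two IDs per component in the log above are most likely an exited instance listed alongside the current one (the `-a` flag includes stopped containers), which is why both get their logs gathered in the cycles that follow.
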
	I0910 11:13:03.384902    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:13:03.384919    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:13:03.400173    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:13:03.400184    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:13:03.411591    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:13:03.411602    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:13:03.423003    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:13:03.423013    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:13:03.433906    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:13:03.433916    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:13:03.445169    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:13:03.445181    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:13:03.456834    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:13:03.456845    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:13:03.485257    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:13:03.485267    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:13:03.502702    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:13:03.502714    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:13:03.539133    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:13:03.539145    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:13:03.553291    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:13:03.553302    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:13:03.590343    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:13:03.590355    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:13:03.604854    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:13:03.604864    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:13:03.630122    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:13:03.630131    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:13:03.642836    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:13:03.642848    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:13:03.681396    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:13:03.681406    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:13:03.685705    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:13:03.685714    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
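
Besides per-container logs, each cycle pulls host-level sources: journalctl for the docker/cri-docker and kubelet units, a filtered dmesg, kubectl describe nodes against the in-VM kubeconfig, and a crictl-or-docker container listing. A sketch of those collectors batched from Go, assuming passwordless sudo inside the guest (the commands are copied verbatim from the log; the batching is illustrative):

	// hostLogs runs the same host-side collectors seen in the log above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmds := []string{
			"sudo journalctl -u docker -u cri-docker -n 400",
			"sudo journalctl -u kubelet -n 400",
			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes" +
				" --kubeconfig=/var/lib/minikube/kubeconfig",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for _, c := range cmds {
			out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
			if err != nil {
				fmt.Printf("%s: %v\n", c, err)
			}
			fmt.Printf("%s\n", out)
		}
	}

Note the last command's fallback shape: it prefers crictl when installed and degrades to `docker ps -a`, so the "container status" section is populated on either runtime.
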
	I0910 11:13:06.202265    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:05.415873    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:05.415894    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:11.204562    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:11.204710    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:13:11.217952    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:13:11.218036    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:13:11.229103    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:13:11.229182    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:13:11.240483    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:13:11.240556    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:13:11.251926    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:13:11.252004    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:13:11.262574    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:13:11.262644    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:13:11.273040    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:13:11.273112    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:13:11.285906    5456 logs.go:276] 0 containers: []
	W0910 11:13:11.285917    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:13:11.285977    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:13:11.301424    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:13:11.301440    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:13:11.301446    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:13:11.338015    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:13:11.338027    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:13:11.351904    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:13:11.351915    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:13:11.384522    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:13:11.384532    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:13:11.403270    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:13:11.403280    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:13:11.407894    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:13:11.407903    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:13:11.419431    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:13:11.419442    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:13:11.431736    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:13:11.431748    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:13:11.471682    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:13:11.471696    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:13:11.510101    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:13:11.510112    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:13:11.522356    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:13:11.522368    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:13:11.534212    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:13:11.534225    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:13:11.553201    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:13:11.553215    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:13:11.564597    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:13:11.564615    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:13:11.583436    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:13:11.583448    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:13:11.595681    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:13:11.595696    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:13:11.606973    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:13:11.606984    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:13:10.416405    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:10.416455    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:15.417142    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:15.417168    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0910 11:13:15.782820    5250 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0910 11:13:15.787249    5250 out.go:177] * Enabled addons: storage-provisioner
	I0910 11:13:14.135914    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:15.798065    5250 addons.go:510] duration metric: took 30.499767375s for enable addons: enabled=[storage-provisioner]
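
The 'default-storageclass' warning above is not a separate failure: enabling that addon needs one List call against /apis/storage.k8s.io/v1/storageclasses, and with the apiserver unreachable it times out just like the healthz probes, so only the API-independent storage-provisioner addon is reported enabled. A minimal client-go sketch of the call that failed (the kubeconfig path is the one from this log; the module versions and 30-second timeout are assumptions):

	// listStorageClasses performs the List that "Enabling 'default-storageclass'"
	// depends on; against 10.0.2.15:8443 in this run it would fail with the
	// same dial/i-o timeout quoted above.
	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		scs, err := clientset.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
		if err != nil {
			fmt.Println("Error listing StorageClasses:", err)
			return
		}
		for _, sc := range scs.Items {
			fmt.Println(sc.Name)
		}
	}
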
	I0910 11:13:19.136964    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:19.137185    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:13:19.155457    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:13:19.155530    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:13:19.168841    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:13:19.168914    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:13:19.180523    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:13:19.180593    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:13:19.191374    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:13:19.191443    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:13:19.202151    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:13:19.202210    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:13:19.212559    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:13:19.212623    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:13:19.222687    5456 logs.go:276] 0 containers: []
	W0910 11:13:19.222699    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:13:19.222753    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:13:19.233211    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:13:19.233229    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:13:19.233235    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:13:19.247615    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:13:19.247626    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:13:19.259138    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:13:19.259153    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:13:19.282961    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:13:19.282969    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:13:19.300221    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:13:19.300231    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:13:19.311929    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:13:19.311944    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:13:19.324243    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:13:19.324254    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:13:19.328406    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:13:19.328413    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:13:19.340154    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:13:19.340164    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:13:19.351488    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:13:19.351501    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:13:19.365070    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:13:19.365081    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:13:19.378947    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:13:19.378957    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:13:19.390272    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:13:19.390282    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:13:19.402341    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:13:19.402352    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:13:19.415456    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:13:19.415467    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:13:19.452727    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:13:19.452743    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:13:19.493007    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:13:19.493020    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:13:20.418004    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:20.418043    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:22.037307    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:25.419156    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:25.419221    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:27.039695    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:27.039820    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:13:27.053188    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:13:27.053269    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:13:27.066942    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:13:27.067018    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:13:27.077723    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:13:27.077797    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:13:27.088323    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:13:27.088398    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:13:27.099929    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:13:27.099999    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:13:27.110583    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:13:27.110657    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:13:27.121238    5456 logs.go:276] 0 containers: []
	W0910 11:13:27.121251    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:13:27.121310    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:13:27.132049    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:13:27.132068    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:13:27.132074    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:13:27.166873    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:13:27.166885    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:13:27.183938    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:13:27.183949    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:13:27.195719    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:13:27.195733    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:13:27.200305    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:13:27.200312    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:13:27.238694    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:13:27.238705    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:13:27.252896    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:13:27.252907    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:13:27.264677    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:13:27.264687    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:13:27.276215    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:13:27.276225    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:13:27.287983    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:13:27.287994    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:13:27.299617    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:13:27.299629    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:13:27.314209    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:13:27.314220    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:13:27.325617    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:13:27.325631    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:13:27.343302    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:13:27.343312    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:13:27.355134    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:13:27.355146    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:13:27.393428    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:13:27.393437    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:13:27.405720    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:13:27.405731    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:13:29.929639    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:30.420740    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:30.420782    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:34.931837    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:34.932074    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:13:34.963164    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:13:34.963274    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:13:34.981029    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:13:34.981123    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:13:34.995042    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:13:34.995121    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:13:35.007011    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:13:35.007080    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:13:35.022377    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:13:35.022468    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:13:35.033552    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:13:35.033626    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:13:35.043794    5456 logs.go:276] 0 containers: []
	W0910 11:13:35.043804    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:13:35.043865    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:13:35.054256    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:13:35.054272    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:13:35.054278    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:13:35.071582    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:13:35.071593    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:13:35.090360    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:13:35.090371    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:13:35.115470    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:13:35.115483    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:13:35.127640    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:13:35.127650    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:13:35.131846    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:13:35.131852    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:13:35.146036    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:13:35.146049    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:13:35.161233    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:13:35.161246    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:13:35.173223    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:13:35.173237    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:13:35.184947    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:13:35.184957    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:13:35.221471    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:13:35.221483    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:13:35.235851    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:13:35.235864    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:13:35.249511    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:13:35.249522    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:13:35.260442    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:13:35.260455    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:13:35.272004    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:13:35.272015    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:13:35.284035    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:13:35.284048    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:13:35.318366    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:13:35.318378    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:13:35.422631    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:35.422655    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:37.859642    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:40.424712    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:40.424757    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:42.862045    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:42.862199    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:13:42.876541    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:13:42.876628    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:13:42.888234    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:13:42.888300    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:13:42.898728    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:13:42.898809    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:13:42.909165    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:13:42.909235    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:13:42.919507    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:13:42.919579    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:13:42.930226    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:13:42.930298    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:13:42.943829    5456 logs.go:276] 0 containers: []
	W0910 11:13:42.943842    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:13:42.943903    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:13:42.954678    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:13:42.954697    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:13:42.954701    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:13:42.968539    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:13:42.968548    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:13:42.980152    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:13:42.980163    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:13:42.992468    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:13:42.992481    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:13:42.996817    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:13:42.996826    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:13:43.031196    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:13:43.031209    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:13:43.078139    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:13:43.078152    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:13:43.094791    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:13:43.094804    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:13:43.110302    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:13:43.110316    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:13:43.148726    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:13:43.148738    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:13:43.165959    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:13:43.165970    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:13:43.179849    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:13:43.179860    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:13:43.204742    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:13:43.204750    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:13:43.216791    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:13:43.216804    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:13:43.228209    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:13:43.228220    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:13:43.239602    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:13:43.239614    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:13:43.257190    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:13:43.257200    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:13:45.772270    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:45.426960    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:45.427049    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:13:45.440981    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:13:45.441056    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:13:45.452251    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:13:45.452331    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:13:45.466791    5250 logs.go:276] 2 containers: [7e18ed854af8 7fb3f2c0be6a]
	I0910 11:13:45.466871    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:13:45.478839    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:13:45.478909    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:13:45.489949    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:13:45.490026    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:13:45.500639    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:13:45.500710    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:13:45.510617    5250 logs.go:276] 0 containers: []
	W0910 11:13:45.510628    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:13:45.510690    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:13:45.521213    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:13:45.521228    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:13:45.521234    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:13:45.535067    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:13:45.535081    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:13:45.547617    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:13:45.547631    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:13:45.562434    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:13:45.562447    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:13:45.580845    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:13:45.580857    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:13:45.618825    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:13:45.618839    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:13:45.623444    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:13:45.623451    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:13:45.659303    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:13:45.659318    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:13:45.683354    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:13:45.683368    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:13:45.700573    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:13:45.700586    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:13:45.712500    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:13:45.712510    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:13:45.726361    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:13:45.726374    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:13:45.751372    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:13:45.751380    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:13:50.774046    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:50.774217    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:13:50.786214    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:13:50.786294    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:13:50.796897    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:13:50.796970    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:13:50.807821    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:13:50.807891    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:13:50.818943    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:13:50.819015    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:13:50.829244    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:13:50.829314    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:13:50.845098    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:13:50.845172    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:13:50.855652    5456 logs.go:276] 0 containers: []
	W0910 11:13:50.855668    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:13:50.855729    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:13:50.866630    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:13:50.866650    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:13:50.866656    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:13:50.903218    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:13:50.903230    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:13:50.921723    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:13:50.921735    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:13:50.933525    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:13:50.933539    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:13:50.950686    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:13:50.950696    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:13:50.962659    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:13:50.962671    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:13:50.967427    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:13:50.967433    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:13:51.012437    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:13:51.012448    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:13:51.036674    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:13:51.036683    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:13:51.051245    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:13:51.051257    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:13:51.065237    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:13:51.065247    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:13:51.080392    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:13:51.080403    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:13:51.092043    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:13:51.092054    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:13:51.105388    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:13:51.105399    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:13:51.144318    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:13:51.144330    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:13:51.156682    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:13:51.156695    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:13:51.168393    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:13:51.168403    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:13:48.264665    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:53.682813    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:53.267199    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:53.267407    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:13:53.289929    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:13:53.290047    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:13:53.309204    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:13:53.309287    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:13:53.321710    5250 logs.go:276] 2 containers: [7e18ed854af8 7fb3f2c0be6a]
	I0910 11:13:53.321784    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:13:53.332258    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:13:53.332331    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:13:53.342928    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:13:53.343001    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:13:53.353535    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:13:53.353606    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:13:53.363419    5250 logs.go:276] 0 containers: []
	W0910 11:13:53.363430    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:13:53.363485    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:13:53.374421    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:13:53.374439    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:13:53.374445    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:13:53.412707    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:13:53.412719    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:13:53.417003    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:13:53.417012    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:13:53.455895    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:13:53.455907    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:13:53.470573    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:13:53.470585    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:13:53.482727    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:13:53.482737    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:13:53.494828    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:13:53.494839    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:13:53.518127    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:13:53.518141    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:13:53.533036    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:13:53.533047    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:13:53.544629    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:13:53.544640    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:13:53.558517    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:13:53.558531    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:13:53.576926    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:13:53.576938    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:13:53.589150    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:13:53.589161    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:13:56.116366    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:58.684993    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:58.685199    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:13:58.702547    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:13:58.702645    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:13:58.717727    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:13:58.717801    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:13:58.729414    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:13:58.729481    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:13:58.741520    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:13:58.741593    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:13:58.751426    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:13:58.751491    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:13:58.761684    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:13:58.761752    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:13:58.784889    5456 logs.go:276] 0 containers: []
	W0910 11:13:58.784901    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:13:58.784964    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:13:58.795239    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:13:58.795258    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:13:58.795265    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:13:58.807054    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:13:58.807066    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:13:58.819644    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:13:58.819656    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:13:58.831711    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:13:58.831725    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:13:58.849573    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:13:58.849582    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:13:58.860630    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:13:58.860642    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:13:58.886525    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:13:58.886535    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:13:58.944383    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:13:58.944397    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:13:58.958789    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:13:58.958800    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:13:58.970733    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:13:58.970746    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:13:58.982190    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:13:58.982202    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:13:59.020129    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:13:59.020140    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:13:59.033473    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:13:59.033484    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:13:59.044804    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:13:59.044816    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:13:59.057471    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:13:59.057483    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:13:59.061814    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:13:59.061823    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:13:59.097098    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:13:59.097111    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:14:01.612568    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:01.118809    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:01.119032    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:01.143025    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:14:01.143152    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:01.160591    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:14:01.160676    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:01.173356    5250 logs.go:276] 2 containers: [7e18ed854af8 7fb3f2c0be6a]
	I0910 11:14:01.173431    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:01.184537    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:14:01.184608    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:01.195362    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:14:01.195433    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:01.205785    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:14:01.205862    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:01.216104    5250 logs.go:276] 0 containers: []
	W0910 11:14:01.216117    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:01.216174    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:01.226735    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:14:01.226752    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:01.226760    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:01.231514    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:14:01.231522    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:14:01.245382    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:14:01.245396    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:14:01.264933    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:14:01.264944    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:14:01.279064    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:14:01.279077    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:14:01.290837    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:14:01.290851    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:14:01.302685    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:01.302696    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:01.340703    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:14:01.340714    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:14:01.354749    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:14:01.354760    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:14:01.367038    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:14:01.367050    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:14:01.384323    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:01.384336    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:01.408791    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:14:01.408799    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:01.420335    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:01.420345    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
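
	Each failed health check triggers the same diagnostic pass seen above: enumerate the Kubernetes system containers by name, tail each one's logs, then collect kubelet, Docker, dmesg, and node state. The individual commands below are taken verbatim from the log; the loop scaffolding around them is an illustrative assumption, not minikube's actual implementation (which drives these commands over SSH from logs.go):

	    #!/bin/bash
	    # Sketch of one log-gathering pass, assuming it runs inside the minikube guest.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet storage-provisioner; do
	        # List matching containers, running or exited (may be empty, e.g. kindnet).
	        for id in $(docker ps -a --filter=name=k8s_${name} --format={{.ID}}); do
	            echo "==> ${name} [${id}]"
	            docker logs --tail 400 "${id}"
	        done
	    done
	    # Host-level sources gathered in the same pass.
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u docker -u cri-docker -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

	The remainder of the section repeats this probe/gather cycle for both processes until the retry budget is exhausted; only the ordering of the per-component steps varies between passes.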
	I0910 11:14:06.614764    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:06.615003    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:03.955068    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:06.640996    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:14:06.641118    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:06.660443    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:14:06.660530    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:06.673049    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:14:06.673124    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:06.684045    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:14:06.684120    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:06.700916    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:14:06.700979    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:06.712170    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:14:06.712235    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:06.722336    5456 logs.go:276] 0 containers: []
	W0910 11:14:06.722347    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:06.722406    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:06.732841    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:14:06.732858    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:06.732863    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:06.736994    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:14:06.737002    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:14:06.750679    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:14:06.750689    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:14:06.767893    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:14:06.767906    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:14:06.781698    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:14:06.781709    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:14:06.803435    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:14:06.803446    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:14:06.814740    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:14:06.814751    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:14:06.825987    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:14:06.825999    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:14:06.837546    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:14:06.837558    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:14:06.850045    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:14:06.850054    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:14:06.860926    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:14:06.860940    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:14:06.879096    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:14:06.879106    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:14:06.896639    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:14:06.896652    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:06.908482    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:06.908496    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:06.946584    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:06.946596    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:06.987009    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:14:06.987019    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:14:07.024685    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:07.024696    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:09.549083    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:08.956067    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:08.956367    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:08.985539    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:14:08.985671    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:09.003305    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:14:09.003393    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:09.016873    5250 logs.go:276] 2 containers: [7e18ed854af8 7fb3f2c0be6a]
	I0910 11:14:09.016953    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:09.028422    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:14:09.028491    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:09.038561    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:14:09.038637    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:09.048687    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:14:09.048763    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:09.058646    5250 logs.go:276] 0 containers: []
	W0910 11:14:09.058658    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:09.058719    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:09.069511    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:14:09.069527    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:14:09.069532    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:14:09.081578    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:14:09.081589    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:14:09.095875    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:14:09.095888    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:14:09.121140    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:14:09.121150    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:14:09.133005    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:14:09.133022    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:09.145015    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:09.145026    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:09.185395    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:09.185408    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:09.190212    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:09.190218    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:09.227714    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:14:09.227726    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:14:09.240238    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:09.240250    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:09.264170    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:14:09.264180    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:14:09.282360    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:14:09.282372    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:14:09.296111    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:14:09.296121    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:14:11.809660    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:14.551208    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:14.551426    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:14.567469    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:14:14.567551    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:14.580329    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:14:14.580406    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:14.591818    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:14:14.591888    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:14.608906    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:14:14.608975    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:14.619568    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:14:14.619644    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:14.632994    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:14:14.633065    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:14.643262    5456 logs.go:276] 0 containers: []
	W0910 11:14:14.643277    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:14.643338    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:14.653711    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:14:14.653728    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:14.653734    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:14.658162    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:14.658172    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:14.694476    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:14:14.694487    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:14:14.734930    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:14:14.734941    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:14:14.751262    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:14:14.751275    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:14:14.767907    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:14.767922    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:14.805437    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:14:14.805451    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:14:14.816669    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:14:14.816679    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:14:14.829066    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:14:14.829076    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:14:14.841003    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:14:14.841012    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:14:14.852309    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:14:14.852320    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:14:14.863758    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:14.863768    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:14.888039    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:14:14.888048    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:14:14.904107    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:14:14.904117    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:14:14.918363    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:14:14.918373    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:14:14.929897    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:14:14.929906    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:14:14.947430    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:14:14.947442    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:16.811786    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:16.811895    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:16.827324    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:14:16.827397    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:16.837531    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:14:16.837604    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:16.847901    5250 logs.go:276] 2 containers: [7e18ed854af8 7fb3f2c0be6a]
	I0910 11:14:16.847973    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:16.858514    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:14:16.858579    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:16.868953    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:14:16.869025    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:16.880441    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:14:16.880516    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:16.890527    5250 logs.go:276] 0 containers: []
	W0910 11:14:16.890539    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:16.890592    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:16.901233    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:14:16.901248    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:14:16.901254    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:14:16.913093    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:16.913105    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:16.951163    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:14:16.951176    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:14:16.965897    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:14:16.965910    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:14:16.979980    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:14:16.979993    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:14:16.992055    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:14:16.992067    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:14:17.004202    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:14:17.004214    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:14:17.019519    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:14:17.019528    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:14:17.037003    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:17.037012    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:17.062143    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:17.062152    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:17.066810    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:17.066818    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:17.102169    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:14:17.102184    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:14:17.118420    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:14:17.118431    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:17.461321    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:19.631556    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:22.463516    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:22.463949    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:22.504874    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:14:22.505010    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:22.523266    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:14:22.523365    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:22.537698    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:14:22.537778    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:22.552401    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:14:22.552473    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:22.562361    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:14:22.562434    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:22.573000    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:14:22.573069    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:22.583459    5456 logs.go:276] 0 containers: []
	W0910 11:14:22.583477    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:22.583542    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:22.600665    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:14:22.600682    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:14:22.600687    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:14:22.611758    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:14:22.611770    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:14:22.629118    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:14:22.629130    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:14:22.642520    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:14:22.642533    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:14:22.654299    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:14:22.654310    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:22.666689    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:22.666700    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:22.671355    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:14:22.671361    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:14:22.682824    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:14:22.682838    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:14:22.697522    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:14:22.697531    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:14:22.711057    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:22.711067    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:22.750261    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:14:22.750277    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:14:22.790974    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:14:22.790988    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:14:22.809036    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:14:22.809047    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:14:22.821335    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:14:22.821346    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:14:22.833439    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:14:22.833451    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:14:22.844253    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:22.844264    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:22.866899    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:22.866907    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:25.403837    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:24.634030    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:24.634220    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:24.655532    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:14:24.655628    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:24.671537    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:14:24.671612    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:24.683122    5250 logs.go:276] 2 containers: [7e18ed854af8 7fb3f2c0be6a]
	I0910 11:14:24.683191    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:24.693500    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:14:24.693569    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:24.704341    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:14:24.704406    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:24.715226    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:14:24.715286    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:24.725681    5250 logs.go:276] 0 containers: []
	W0910 11:14:24.725693    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:24.725752    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:24.737526    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:14:24.737549    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:24.737557    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:24.742453    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:14:24.742461    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:14:24.757215    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:14:24.757226    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:14:24.769224    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:24.769233    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:24.797638    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:14:24.797648    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:14:24.814394    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:14:24.814406    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:14:24.826309    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:14:24.826324    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:14:24.838030    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:14:24.838041    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:14:24.860481    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:24.860491    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:24.898453    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:24.898462    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:24.932604    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:14:24.932615    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:14:24.947428    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:14:24.947439    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:14:24.964179    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:14:24.964191    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:30.405347    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:30.405701    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:30.435330    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:14:30.435465    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:30.455860    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:14:30.455939    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:30.469662    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:14:30.469736    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:30.480880    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:14:30.480950    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:30.492367    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:14:30.492432    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:30.509595    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:14:30.509666    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:30.519661    5456 logs.go:276] 0 containers: []
	W0910 11:14:30.519679    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:30.519741    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:30.530132    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:14:30.530148    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:30.530154    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:30.567356    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:14:30.567369    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:14:30.578997    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:14:30.579010    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:14:30.595239    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:30.595251    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:30.618108    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:30.618118    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:30.652159    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:14:30.652172    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:14:30.667335    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:14:30.667347    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:14:30.679172    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:14:30.679185    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:14:30.696713    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:14:30.696723    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:14:30.709886    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:30.709894    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:30.714191    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:14:30.714198    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:14:30.752720    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:14:30.752733    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:14:30.764102    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:14:30.764112    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:14:30.780672    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:14:30.780685    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:14:30.794939    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:14:30.794951    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:14:30.806735    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:14:30.806746    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:14:30.818561    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:14:30.818573    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:27.477791    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:33.332750    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:32.479019    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:32.479523    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:32.516842    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:14:32.516972    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:32.539348    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:14:32.539448    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:32.553846    5250 logs.go:276] 2 containers: [7e18ed854af8 7fb3f2c0be6a]
	I0910 11:14:32.553922    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:32.566092    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:14:32.566161    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:32.577394    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:14:32.577471    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:32.587993    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:14:32.588056    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:32.598111    5250 logs.go:276] 0 containers: []
	W0910 11:14:32.598121    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:32.598174    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:32.609883    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:14:32.609900    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:14:32.609905    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:14:32.624577    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:14:32.624587    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:14:32.636134    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:14:32.636149    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:14:32.656320    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:14:32.656331    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:14:32.668238    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:14:32.668247    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:14:32.685694    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:32.685705    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:32.724499    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:32.724509    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:32.728625    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:14:32.728631    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:14:32.740922    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:14:32.740937    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:14:32.752782    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:32.752796    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:32.779015    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:14:32.779033    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:32.790841    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:32.790852    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:32.849384    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:14:32.849397    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:14:35.372057    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:38.335270    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:38.335779    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:38.375147    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:14:38.375285    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:38.396055    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:14:38.396156    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:38.410803    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:14:38.410875    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:38.422934    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:14:38.423003    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:38.433949    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:14:38.434022    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:38.445354    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:14:38.445421    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:38.456150    5456 logs.go:276] 0 containers: []
	W0910 11:14:38.456161    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:38.456218    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:38.467025    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:14:38.467045    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:14:38.467050    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:14:38.481508    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:14:38.481521    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:14:38.494394    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:14:38.494405    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:38.506460    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:38.506472    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:38.548301    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:14:38.548315    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:14:38.560338    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:14:38.560348    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:14:38.577319    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:14:38.577331    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:14:38.591049    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:14:38.591059    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:14:38.627743    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:14:38.627755    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:14:38.639095    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:14:38.639107    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:14:38.650923    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:14:38.650936    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:14:38.670799    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:14:38.670811    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:14:38.682083    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:38.682094    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:38.705024    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:38.705032    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:38.741196    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:14:38.741205    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:14:38.755306    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:14:38.755315    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:14:38.770941    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:38.770951    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:41.279094    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:40.374280    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:40.374516    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:40.401146    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:14:40.401267    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:40.417199    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:14:40.417275    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:40.435479    5250 logs.go:276] 2 containers: [7e18ed854af8 7fb3f2c0be6a]
	I0910 11:14:40.435557    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:40.446493    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:14:40.446566    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:40.460924    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:14:40.460991    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:40.471477    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:14:40.471548    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:40.481275    5250 logs.go:276] 0 containers: []
	W0910 11:14:40.481288    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:40.481349    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:40.492081    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:14:40.492096    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:14:40.492102    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:14:40.509775    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:40.509788    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:40.533249    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:40.533258    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:40.538954    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:14:40.538962    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:14:40.553358    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:14:40.553372    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:14:40.567584    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:14:40.567594    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:14:40.581414    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:14:40.581423    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:14:40.593148    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:14:40.593161    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:14:40.611699    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:14:40.611712    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:14:40.624000    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:14:40.624010    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:14:40.635323    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:40.635333    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:40.672650    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:40.672658    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:40.707373    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:14:40.707385    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:46.281246    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:46.281438    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:46.304185    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:14:46.304283    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:46.319244    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:14:46.319345    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:46.331670    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:14:46.331736    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:46.342719    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:14:46.342792    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:46.352839    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:14:46.352906    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:46.367274    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:14:46.367350    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:46.380184    5456 logs.go:276] 0 containers: []
	W0910 11:14:46.380195    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:46.380248    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:46.390763    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:14:46.390785    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:14:46.390791    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:14:46.402386    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:14:46.402396    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:14:46.426676    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:14:46.426686    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:14:46.438015    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:46.438027    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:46.462267    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:14:46.462278    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:46.474088    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:46.474099    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:46.513205    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:14:46.513218    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:14:46.552087    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:14:46.552098    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:14:46.565129    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:46.565143    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:46.569691    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:14:46.569698    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:14:46.594539    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:14:46.594549    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:14:46.611743    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:14:46.611752    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:14:43.221977    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:46.625775    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:14:46.625788    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:14:46.637037    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:14:46.637050    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:14:46.648116    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:14:46.648128    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:14:46.660420    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:46.660435    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:46.694514    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:14:46.694528    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:14:49.208773    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:48.224147    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:48.224333    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:48.238901    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:14:48.238980    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:48.250189    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:14:48.250256    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:48.260695    5250 logs.go:276] 3 containers: [de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:14:48.260768    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:48.271011    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:14:48.271087    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:48.281507    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:14:48.281576    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:48.291719    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:14:48.291798    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:48.302690    5250 logs.go:276] 0 containers: []
	W0910 11:14:48.302703    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:48.302767    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:48.313161    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:14:48.313176    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:14:48.313183    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:14:48.327084    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:14:48.327098    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:14:48.339167    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:14:48.339180    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:48.350282    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:48.350293    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:48.387827    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:14:48.387841    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:14:48.399995    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:14:48.400005    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:14:48.411554    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:14:48.411564    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:14:48.432766    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:48.432780    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:48.458739    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:48.458747    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:48.495556    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:14:48.495567    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:14:48.507445    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:14:48.507458    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:14:48.520906    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:14:48.520921    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:14:48.535727    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:48.535737    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:48.540600    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:14:48.540609    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:14:51.056302    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:54.211005    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:54.211271    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:54.231519    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:14:54.231613    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:54.248436    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:14:54.248513    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:54.260497    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:14:54.260566    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:54.270940    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:14:54.271013    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:54.281257    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:14:54.281327    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:54.291684    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:14:54.291753    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:54.301849    5456 logs.go:276] 0 containers: []
	W0910 11:14:54.301861    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:54.301921    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:54.312024    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:14:54.312042    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:14:54.312047    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:14:54.325082    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:14:54.325092    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:14:54.343369    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:14:54.343379    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:14:54.355172    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:14:54.355182    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:14:54.366693    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:54.366706    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:54.403117    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:14:54.403128    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:54.416371    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:54.416386    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:54.452346    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:14:54.452356    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:14:54.470347    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:14:54.470359    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:14:54.482543    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:14:54.482554    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:14:54.495556    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:14:54.495566    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:14:54.512448    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:14:54.512459    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:14:54.526252    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:14:54.526265    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:14:54.537480    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:54.537492    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:54.561884    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:54.561894    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:54.566573    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:14:54.566582    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:14:54.581080    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:14:54.581095    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:14:56.058762    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:56.059088    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:56.094734    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:14:56.094870    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:56.113991    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:14:56.114077    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:56.128123    5250 logs.go:276] 3 containers: [de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:14:56.128206    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:56.139582    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:14:56.139665    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:56.150100    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:14:56.150165    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:56.160675    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:14:56.160747    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:56.171068    5250 logs.go:276] 0 containers: []
	W0910 11:14:56.171079    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:56.171142    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:56.181250    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:14:56.181272    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:14:56.181278    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:14:56.195748    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:14:56.195759    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:14:56.209725    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:56.209736    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:56.234953    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:14:56.234962    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:14:56.246842    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:14:56.246852    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:14:56.264941    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:56.264952    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:56.270977    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:56.270992    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:56.307731    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:14:56.307743    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:14:56.322341    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:14:56.322351    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:14:56.340080    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:14:56.340089    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:14:56.351717    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:14:56.351728    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:56.364910    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:56.364922    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:56.405175    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:14:56.405185    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:14:56.420224    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:14:56.420233    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:14:57.129192    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:58.937222    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:02.130439    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:02.130585    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:02.142274    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:15:02.142349    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:02.153013    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:15:02.153086    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:02.163762    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:15:02.163825    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:02.174485    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:15:02.174555    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:02.185085    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:15:02.185147    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:02.198831    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:15:02.198901    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:02.213527    5456 logs.go:276] 0 containers: []
	W0910 11:15:02.213541    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:02.213605    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:02.224535    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:15:02.224553    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:15:02.224559    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:15:02.235534    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:15:02.235545    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:15:02.256840    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:02.256850    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:02.280834    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:15:02.280841    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:15:02.322911    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:15:02.322921    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:15:02.336857    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:02.336867    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:02.370781    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:15:02.370793    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:15:02.384802    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:15:02.384812    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:15:02.404511    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:15:02.404521    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:15:02.416427    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:15:02.416437    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:15:02.430203    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:15:02.430213    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:15:02.441431    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:02.441442    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:02.479058    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:02.479072    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:02.483285    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:15:02.483291    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:15:02.495761    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:15:02.495772    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:02.507917    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:15:02.507928    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:15:02.520330    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:15:02.520343    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:15:05.033954    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:03.939464    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:03.939683    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:03.964067    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:15:03.964173    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:03.981020    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:15:03.981108    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:03.994395    5250 logs.go:276] 4 containers: [fe45ed23e090 de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:15:03.994479    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:04.006232    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:15:04.006298    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:04.017440    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:15:04.017516    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:04.028017    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:15:04.028090    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:04.037952    5250 logs.go:276] 0 containers: []
	W0910 11:15:04.037963    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:04.038021    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:04.048604    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:15:04.048621    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:15:04.048627    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:15:04.063328    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:15:04.063343    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:15:04.074607    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:04.074617    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:04.110589    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:15:04.110600    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:15:04.125315    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:15:04.125328    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:15:04.136973    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:15:04.136983    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:15:04.149542    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:15:04.149553    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:15:04.167834    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:15:04.167844    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:15:04.184487    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:04.184497    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:04.189552    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:15:04.189559    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:15:04.203764    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:15:04.203774    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:15:04.215870    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:15:04.215880    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:15:04.228018    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:04.228029    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:04.268080    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:04.268088    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:04.292676    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:15:04.292686    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:06.807502    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:10.034308    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:10.034555    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:10.061301    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:15:10.061388    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:10.083168    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:15:10.083245    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:10.097471    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:15:10.097544    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:10.108808    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:15:10.108879    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:10.123598    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:15:10.123665    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:10.134791    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:15:10.134860    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:10.144697    5456 logs.go:276] 0 containers: []
	W0910 11:15:10.144710    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:10.144762    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:10.155735    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:15:10.155755    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:15:10.155761    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:15:10.166988    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:15:10.167000    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:15:10.181614    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:15:10.181624    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:15:10.193423    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:15:10.193434    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:15:10.206683    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:15:10.206694    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:15:10.218611    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:10.218622    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:10.256588    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:15:10.256597    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:15:10.268647    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:15:10.268660    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:15:10.280439    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:15:10.280451    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:10.293001    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:15:10.293011    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:15:10.307579    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:10.307590    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:10.342064    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:15:10.342076    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:15:10.356353    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:15:10.356363    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:15:10.396137    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:15:10.396153    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:15:10.407787    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:15:10.407801    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:15:10.424854    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:10.424864    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:10.447355    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:10.447363    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:11.809758    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:11.810152    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:11.838862    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:15:11.838984    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:11.856644    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:15:11.856727    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:11.870491    5250 logs.go:276] 4 containers: [fe45ed23e090 de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:15:11.870569    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:11.882752    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:15:11.882826    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:11.893398    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:15:11.893466    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:11.904886    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:15:11.904959    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:11.918855    5250 logs.go:276] 0 containers: []
	W0910 11:15:11.918865    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:11.918919    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:11.929575    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:15:11.929591    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:11.929596    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:11.967393    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:15:11.967404    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:15:11.982163    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:15:11.982177    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:15:11.996223    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:15:11.996236    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:15:12.008926    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:15:12.008937    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:15:12.023635    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:12.023646    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:12.048312    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:15:12.048329    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:12.060771    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:12.060783    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:12.100967    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:15:12.100980    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:15:12.116962    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:15:12.116975    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:15:12.129924    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:15:12.129936    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:15:12.145318    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:12.145330    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:12.150201    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:15:12.150208    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:15:12.161951    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:15:12.161964    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:15:12.174783    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:15:12.174798    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:15:12.953403    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:14.699430    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:17.955783    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:17.955984    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:17.972084    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:15:17.972174    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:17.984137    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:15:17.984203    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:17.999530    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:15:17.999604    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:18.009681    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:15:18.009756    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:18.020293    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:15:18.020354    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:18.030832    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:15:18.030904    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:18.041010    5456 logs.go:276] 0 containers: []
	W0910 11:15:18.041021    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:18.041087    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:18.051601    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:15:18.051619    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:15:18.051625    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:15:18.090025    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:15:18.090042    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:15:18.106260    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:15:18.106271    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:15:18.118360    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:15:18.118372    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:15:18.136268    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:15:18.136278    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:15:18.147532    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:15:18.147541    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:15:18.158871    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:18.158881    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:18.163184    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:18.163194    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:18.197833    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:15:18.197844    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:15:18.212398    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:15:18.212409    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:15:18.227111    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:18.227122    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:18.265291    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:15:18.265302    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:15:18.277692    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:15:18.277704    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:15:18.290463    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:15:18.290478    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:15:18.301923    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:18.301933    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:18.325352    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:15:18.325360    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:18.337608    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:15:18.337619    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:15:20.853964    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:19.701831    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:19.702214    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:19.738859    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:15:19.739014    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:19.759675    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:15:19.759766    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:19.774763    5250 logs.go:276] 4 containers: [fe45ed23e090 de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:15:19.774848    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:19.787232    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:15:19.787297    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:19.797913    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:15:19.797981    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:19.808441    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:15:19.808504    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:19.819116    5250 logs.go:276] 0 containers: []
	W0910 11:15:19.819127    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:19.819190    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:19.829711    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:15:19.829729    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:15:19.829735    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:15:19.847175    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:19.847186    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:19.883189    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:15:19.883203    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:15:19.899564    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:15:19.899576    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:15:19.915174    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:15:19.915189    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:15:19.927495    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:15:19.927509    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:15:19.944392    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:15:19.944406    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:15:19.961082    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:19.961093    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:20.001022    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:15:20.001031    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:15:20.012422    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:15:20.012433    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:15:20.026989    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:20.027002    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:20.050746    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:20.050754    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:20.054900    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:15:20.054910    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:15:20.066310    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:15:20.066332    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:15:20.079550    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:15:20.079562    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:25.856298    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:25.856530    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:25.874701    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:15:25.874802    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:25.892835    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:15:25.892909    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:25.904428    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:15:25.904502    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:25.919181    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:15:25.919251    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:25.929746    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:15:25.929814    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:25.941214    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:15:25.941285    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:25.952133    5456 logs.go:276] 0 containers: []
	W0910 11:15:25.952145    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:25.952207    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:25.962683    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:15:25.962700    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:15:25.962706    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:15:25.974246    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:15:25.974257    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:15:25.985908    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:15:25.985919    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:15:26.003428    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:15:26.003438    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:15:26.015174    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:15:26.015185    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:15:26.025907    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:26.025919    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:26.030050    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:26.030059    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:26.066004    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:15:26.066015    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:15:26.079933    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:15:26.079943    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:15:26.094786    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:26.094797    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:26.118152    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:15:26.118162    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:26.135626    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:15:26.135638    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:15:26.149647    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:15:26.149657    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:15:26.160998    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:15:26.161014    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:15:26.172283    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:26.172294    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:26.209632    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:15:26.209643    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:15:26.246854    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:15:26.246866    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:15:22.594092    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:28.760426    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:27.596306    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:27.596444    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:27.609310    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:15:27.609394    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:27.620963    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:15:27.621031    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:27.631704    5250 logs.go:276] 4 containers: [fe45ed23e090 de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:15:27.631784    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:27.642243    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:15:27.642314    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:27.653241    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:15:27.653313    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:27.664100    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:15:27.664168    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:27.674798    5250 logs.go:276] 0 containers: []
	W0910 11:15:27.674809    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:27.674864    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:27.689553    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:15:27.689569    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:15:27.689574    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:15:27.702776    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:15:27.702790    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:15:27.717004    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:15:27.717017    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:15:27.732969    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:15:27.732983    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:15:27.750378    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:27.750387    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:27.787772    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:15:27.787780    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:15:27.799155    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:15:27.799165    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:15:27.811232    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:15:27.811244    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:15:27.825893    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:15:27.825906    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:15:27.839585    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:27.839595    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:27.863139    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:15:27.863149    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:27.875004    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:27.875017    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:27.879175    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:27.879183    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:27.913570    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:15:27.913580    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:15:27.934927    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:15:27.934941    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
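With the IDs in hand, the "Gathering logs for ..." steps tail the last 400 lines of each container, plus the kubelet and Docker journals and dmesg, using exactly the commands shown above. A compressed sketch of that loop (the IDs are the ones discovered above; in the real run every command goes through the SSH runner into the guest rather than local exec):

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather tails the last 400 log lines of one container, mirroring the
// `docker logs --tail 400 <id>` invocations in this log.
func gather(name, id string) {
	fmt.Printf("Gathering logs for %s [%s] ...\n", name, id)
	out, err := exec.Command("/bin/bash", "-c", "docker logs --tail 400 "+id).CombinedOutput()
	if err != nil {
		fmt.Println("gather failed:", err)
		return
	}
	fmt.Print(string(out))
}

func main() {
	gather("kube-apiserver", "6c17780cae1a")
	gather("etcd", "11d13bdd6ad1")
	gather("coredns", "fe45ed23e090")
}
```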
	I0910 11:15:30.451414    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:33.762690    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:33.762965    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:33.784810    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:15:33.784914    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:33.800287    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:15:33.800371    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:33.812760    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:15:33.812844    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:33.823704    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:15:33.823773    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:33.834200    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:15:33.834269    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:33.844365    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:15:33.844435    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:33.854234    5456 logs.go:276] 0 containers: []
	W0910 11:15:33.854246    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:33.854300    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:33.864741    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:15:33.864759    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:33.864764    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:33.902283    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:33.902295    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:33.936379    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:15:33.936393    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:15:33.974509    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:15:33.974522    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:15:33.988634    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:15:33.988646    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:15:34.003075    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:15:34.003092    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:15:34.015063    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:15:34.015075    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:15:34.026736    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:15:34.026749    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:15:34.038180    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:15:34.038190    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:15:34.054882    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:34.054892    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:34.076751    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:15:34.076761    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:34.088880    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:34.088892    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:34.092988    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:15:34.092995    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:15:34.110131    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:15:34.110142    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:15:34.122960    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:15:34.122971    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:15:34.137061    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:15:34.137070    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:15:34.149080    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:15:34.149090    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:15:35.452002    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:35.452147    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:35.471239    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:15:35.471320    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:35.483052    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:15:35.483128    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:35.493852    5250 logs.go:276] 4 containers: [fe45ed23e090 de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:15:35.493922    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:35.504174    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:15:35.504245    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:35.515230    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:15:35.515312    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:35.526056    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:15:35.526117    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:35.540368    5250 logs.go:276] 0 containers: []
	W0910 11:15:35.540380    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:35.540432    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:35.551231    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:15:35.551248    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:35.551254    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:35.589440    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:35.589451    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:35.593650    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:15:35.593659    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:15:35.605735    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:35.605750    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:35.630650    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:15:35.630660    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:35.641996    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:15:35.642010    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:15:35.662334    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:15:35.662346    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:15:35.676732    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:15:35.676746    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:15:35.690732    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:15:35.690747    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:15:35.705518    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:35.705528    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:35.740830    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:15:35.740844    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:15:35.753294    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:15:35.753316    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:15:35.765183    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:15:35.765194    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:15:35.776984    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:15:35.776998    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:15:35.794991    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:15:35.795001    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:15:36.660978    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:38.308848    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:41.663272    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:41.663657    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:41.700583    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:15:41.700725    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:41.720380    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:15:41.720464    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:41.734987    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:15:41.735064    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:41.747872    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:15:41.747957    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:41.758952    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:15:41.759015    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:41.769535    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:15:41.769596    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:41.779925    5456 logs.go:276] 0 containers: []
	W0910 11:15:41.779937    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:41.779998    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:41.790356    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:15:41.790372    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:15:41.790377    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:15:41.801250    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:15:41.801260    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:41.813876    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:41.813887    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:41.818006    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:41.818013    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:41.852726    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:15:41.852739    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:15:41.872871    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:15:41.872882    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:15:41.888719    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:15:41.888731    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:15:41.900756    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:41.900768    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:41.923039    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:41.923049    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:41.959109    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:15:41.959121    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:15:41.972832    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:15:41.972847    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:15:41.991000    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:15:41.991010    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:15:42.006817    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:15:42.006828    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:15:42.020526    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:15:42.020536    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:15:42.065774    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:15:42.065788    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:15:42.080944    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:15:42.080957    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:15:42.093207    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:15:42.093217    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:15:44.606692    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:43.311071    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:43.311221    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:43.322562    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:15:43.322644    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:43.332952    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:15:43.333022    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:43.343761    5250 logs.go:276] 4 containers: [fe45ed23e090 de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:15:43.343835    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:43.358767    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:15:43.358843    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:43.369009    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:15:43.369077    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:43.379637    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:15:43.379705    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:43.390173    5250 logs.go:276] 0 containers: []
	W0910 11:15:43.390187    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:43.390248    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:43.401070    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:15:43.401091    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:15:43.401098    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:15:43.418938    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:15:43.418951    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:43.430821    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:15:43.430835    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:15:43.442877    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:43.442890    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:43.484295    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:15:43.484310    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:15:43.499315    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:15:43.499326    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:15:43.511837    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:15:43.511850    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:15:43.529279    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:15:43.529295    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:15:43.543951    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:15:43.543963    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:15:43.555906    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:15:43.555921    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:15:43.571524    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:43.571536    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:43.595767    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:43.595778    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:43.600163    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:43.600172    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:43.636725    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:15:43.636738    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:15:43.651810    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:15:43.651827    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:15:46.165585    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:49.608825    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:49.608921    5456 kubeadm.go:597] duration metric: took 4m4.180505459s to restartPrimaryControlPlane
	W0910 11:15:49.609001    5456 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0910 11:15:49.609035    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0910 11:15:50.631306    5456 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.022280666s)
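The "duration metric: took ..." entries bracket each phase with a wall-clock timer; here the control-plane restart was abandoned after 4m4s and the run fell back to a forced `kubeadm reset`, which completed in about a second. The timing pattern itself is simple; a sketch, where `timedPhase` is an illustrative name rather than a minikube function:

```go
package main

import (
	"log"
	"time"
)

// timedPhase runs one phase and logs its elapsed time in the same
// "duration metric" style as the lines above.
func timedPhase(name string, phase func() error) error {
	start := time.Now()
	err := phase()
	log.Printf("duration metric: took %s to %s", time.Since(start), name)
	return err
}

func main() {
	_ = timedPhase("restartPrimaryControlPlane", func() error {
		time.Sleep(50 * time.Millisecond) // stand-in for the real work
		return nil
	})
}
```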
	I0910 11:15:50.631642    5456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 11:15:50.636764    5456 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 11:15:50.639784    5456 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 11:15:50.642410    5456 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 11:15:50.642416    5456 kubeadm.go:157] found existing configuration files:
	
	I0910 11:15:50.642439    5456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/admin.conf
	I0910 11:15:50.644787    5456 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 11:15:50.644807    5456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 11:15:50.647806    5456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/kubelet.conf
	I0910 11:15:50.650867    5456 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 11:15:50.650895    5456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 11:15:50.653320    5456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/controller-manager.conf
	I0910 11:15:50.656230    5456 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 11:15:50.656249    5456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 11:15:50.659446    5456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/scheduler.conf
	I0910 11:15:50.661969    5456 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 11:15:50.661990    5456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
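The stale-config sweep above is mechanical: for each kubeconfig under /etc/kubernetes, grep for the expected control-plane endpoint and remove the file when the endpoint is absent. After the reset every grep fails with status 2 simply because the files are already gone. A sketch of the same logic, using the endpoint and paths from the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50528"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the pattern or the file is missing,
		// which the log records as "Process exited with status 2".
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}
```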
	I0910 11:15:50.664541    5456 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 11:15:50.729147    5456 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 11:15:51.167768    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:51.167870    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:51.179186    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:15:51.179262    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:51.189723    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:15:51.189799    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:51.201579    5250 logs.go:276] 4 containers: [fe45ed23e090 de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:15:51.201651    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:51.212306    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:15:51.212370    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:51.222943    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:15:51.223016    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:51.233652    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:15:51.233719    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:51.244118    5250 logs.go:276] 0 containers: []
	W0910 11:15:51.244131    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:51.244197    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:51.255468    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:15:51.255487    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:15:51.255494    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:15:51.267753    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:15:51.267765    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:15:51.282440    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:15:51.282452    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:15:51.298361    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:51.298373    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:51.324378    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:15:51.324390    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:51.336850    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:51.336863    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:51.374728    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:15:51.374739    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:15:51.391276    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:15:51.391287    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:15:51.408286    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:15:51.408297    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:15:51.420207    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:51.420217    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:51.425472    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:15:51.425478    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:15:51.438661    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:15:51.438672    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:15:51.455743    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:15:51.455752    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:15:51.467628    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:15:51.467640    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:15:51.485826    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:51.485837    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:54.026695    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:57.293364    5456 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0910 11:15:57.293391    5456 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 11:15:57.293426    5456 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 11:15:57.293472    5456 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 11:15:57.293577    5456 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 11:15:57.293674    5456 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 11:15:57.297550    5456 out.go:235]   - Generating certificates and keys ...
	I0910 11:15:57.297587    5456 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 11:15:57.297623    5456 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 11:15:57.297668    5456 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 11:15:57.297711    5456 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 11:15:57.297755    5456 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 11:15:57.297790    5456 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 11:15:57.297827    5456 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 11:15:57.297862    5456 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 11:15:57.297903    5456 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 11:15:57.297942    5456 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 11:15:57.297963    5456 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 11:15:57.297993    5456 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 11:15:57.298023    5456 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 11:15:57.298052    5456 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 11:15:57.298087    5456 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 11:15:57.298121    5456 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 11:15:57.298191    5456 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 11:15:57.298229    5456 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 11:15:57.298255    5456 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 11:15:57.298287    5456 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 11:15:57.307692    5456 out.go:235]   - Booting up control plane ...
	I0910 11:15:57.307726    5456 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 11:15:57.307757    5456 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 11:15:57.307789    5456 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 11:15:57.307827    5456 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 11:15:57.307896    5456 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0910 11:15:57.307930    5456 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501994 seconds
	I0910 11:15:57.307978    5456 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 11:15:57.308035    5456 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 11:15:57.308061    5456 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 11:15:57.308145    5456 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-163000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 11:15:57.308172    5456 kubeadm.go:310] [bootstrap-token] Using token: xg4vz0.n7rwz82vznccqe8o
	I0910 11:15:57.311784    5456 out.go:235]   - Configuring RBAC rules ...
	I0910 11:15:57.311838    5456 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 11:15:57.311890    5456 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 11:15:57.311963    5456 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 11:15:57.312035    5456 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 11:15:57.312095    5456 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 11:15:57.312141    5456 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 11:15:57.312203    5456 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 11:15:57.312227    5456 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0910 11:15:57.312249    5456 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0910 11:15:57.312251    5456 kubeadm.go:310] 
	I0910 11:15:57.312300    5456 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0910 11:15:57.312305    5456 kubeadm.go:310] 
	I0910 11:15:57.312345    5456 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0910 11:15:57.312351    5456 kubeadm.go:310] 
	I0910 11:15:57.312366    5456 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0910 11:15:57.312393    5456 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 11:15:57.312418    5456 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 11:15:57.312421    5456 kubeadm.go:310] 
	I0910 11:15:57.312452    5456 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0910 11:15:57.312456    5456 kubeadm.go:310] 
	I0910 11:15:57.312480    5456 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 11:15:57.312483    5456 kubeadm.go:310] 
	I0910 11:15:57.312510    5456 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0910 11:15:57.312548    5456 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 11:15:57.312592    5456 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 11:15:57.312596    5456 kubeadm.go:310] 
	I0910 11:15:57.312643    5456 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 11:15:57.312681    5456 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0910 11:15:57.312684    5456 kubeadm.go:310] 
	I0910 11:15:57.312729    5456 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xg4vz0.n7rwz82vznccqe8o \
	I0910 11:15:57.312792    5456 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fe03b769f4337d7c0adc05ef52c00fad5eef028fab37b5c6cf35794f6ca4bdd0 \
	I0910 11:15:57.312804    5456 kubeadm.go:310] 	--control-plane 
	I0910 11:15:57.312808    5456 kubeadm.go:310] 
	I0910 11:15:57.312856    5456 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0910 11:15:57.312860    5456 kubeadm.go:310] 
	I0910 11:15:57.312903    5456 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xg4vz0.n7rwz82vznccqe8o \
	I0910 11:15:57.312961    5456 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fe03b769f4337d7c0adc05ef52c00fad5eef028fab37b5c6cf35794f6ca4bdd0 
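The `--discovery-token-ca-cert-hash` in the join commands above is the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info, which joining nodes use to pin the CA. A sketch that recomputes it from a CA certificate; the path is kubeadm's default location, assumed here rather than taken from this run:

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
```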
	I0910 11:15:57.312967    5456 cni.go:84] Creating CNI manager for ""
	I0910 11:15:57.312974    5456 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:15:57.323750    5456 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 11:15:57.327573    5456 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 11:15:57.332307    5456 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
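The 496-byte file scp'd into /etc/cni/net.d is a bridge CNI conflist. A sketch that writes a representative bridge-plus-host-local config the same way; the JSON is illustrative of the shape only, not a byte-for-byte copy of the file minikube generates, and the subnet is an assumption:

```go
package main

import (
	"fmt"
	"os"
)

// A typical bridge CNI conflist; plugin options and subnet are assumptions.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	// Writing to /tmp here; the real run targets /etc/cni/net.d/1-k8s.conflist.
	if err := os.WriteFile("/tmp/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("wrote %d bytes\n", len(conflist))
}
```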
	I0910 11:15:57.337681    5456 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 11:15:57.337732    5456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 11:15:57.337733    5456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-163000 minikube.k8s.io/updated_at=2024_09_10T11_15_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=stopped-upgrade-163000 minikube.k8s.io/primary=true
	I0910 11:15:57.341487    5456 ops.go:34] apiserver oom_adj: -16
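The oom_adj probe (`cat /proc/$(pgrep kube-apiserver)/oom_adj`) confirms the kubelet has told the kernel to shield the apiserver from the OOM killer; -16 is a strong "avoid" score. A sketch of the same check (oom_adj is the legacy procfs knob this log uses; newer kernels expose oom_score_adj instead):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("no kube-apiserver process:", err)
		return
	}
	pids := strings.Fields(string(out))
	if len(pids) == 0 {
		fmt.Println("no kube-apiserver process")
		return
	}
	adj, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}
```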
	I0910 11:15:57.372446    5456 kubeadm.go:1113] duration metric: took 34.756583ms to wait for elevateKubeSystemPrivileges
	I0910 11:15:57.382046    5456 kubeadm.go:394] duration metric: took 4m11.967498333s to StartCluster
	I0910 11:15:57.382064    5456 settings.go:142] acquiring lock: {Name:mkc4479acb7c6185024679cd35acf0055f682c3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:15:57.382149    5456 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:15:57.382560    5456 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/kubeconfig: {Name:mk1f6cc8b92900503b90f69186dd5a0cadd3a95f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:15:57.382809    5456 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:15:57.382815    5456 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 11:15:57.382862    5456 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-163000"
	I0910 11:15:57.382884    5456 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-163000"
	W0910 11:15:57.382890    5456 addons.go:243] addon storage-provisioner should already be in state true
	I0910 11:15:57.382892    5456 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-163000"
	I0910 11:15:57.382904    5456 host.go:66] Checking if "stopped-upgrade-163000" exists ...
	I0910 11:15:57.382908    5456 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-163000"
	I0910 11:15:57.382921    5456 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0910 11:15:57.383856    5456 kapi.go:59] client config for stopped-upgrade-163000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/client.key", CAFile:"/Users/jenkins/minikube-integration/19598-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10692e200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 11:15:57.383983    5456 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-163000"
	W0910 11:15:57.383988    5456 addons.go:243] addon default-storageclass should already be in state true
	I0910 11:15:57.383995    5456 host.go:66] Checking if "stopped-upgrade-163000" exists ...
	I0910 11:15:57.385673    5456 out.go:177] * Verifying Kubernetes components...
	I0910 11:15:57.386058    5456 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 11:15:57.389864    5456 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 11:15:57.389871    5456 sshutil.go:53] new ssh client: &{IP:localhost Port:50494 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/stopped-upgrade-163000/id_rsa Username:docker}
	I0910 11:15:57.393726    5456 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 11:15:57.397684    5456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 11:15:57.401748    5456 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 11:15:57.401754    5456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 11:15:57.401760    5456 sshutil.go:53] new ssh client: &{IP:localhost Port:50494 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/stopped-upgrade-163000/id_rsa Username:docker}
	I0910 11:15:57.468983    5456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 11:15:57.474411    5456 api_server.go:52] waiting for apiserver process to appear ...
	I0910 11:15:57.474458    5456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 11:15:57.478592    5456 api_server.go:72] duration metric: took 95.775583ms to wait for apiserver process to appear ...
	I0910 11:15:57.478602    5456 api_server.go:88] waiting for apiserver healthz status ...
	I0910 11:15:57.478609    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:57.484779    5456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 11:15:57.506254    5456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
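Addon installation is two steps per addon: scp the manifest into /etc/kubernetes/addons, then apply it with the cluster's own kubectl binary and kubeconfig, as the two Run lines above show. A sketch of the apply half (paths mirror the log; the scp half is omitted):

```go
package main

import (
	"fmt"
	"os/exec"
)

// applyAddon applies one addon manifest with the in-guest kubectl,
// matching the `sudo KUBECONFIG=... kubectl apply -f ...` lines above.
func applyAddon(manifest string) error {
	out, err := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"apply", "-f", manifest).CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v: %s", manifest, err, out)
	}
	return nil
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Println(err)
		}
	}
}
```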
	I0910 11:15:57.845897    5456 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0910 11:15:57.845910    5456 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0910 11:15:59.028789    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:59.028897    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:59.040381    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:15:59.040456    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:59.050586    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:15:59.050658    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:59.061033    5250 logs.go:276] 4 containers: [fe45ed23e090 de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:15:59.061097    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:59.075062    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:15:59.075137    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:59.085659    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:15:59.085731    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:59.097020    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:15:59.097091    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:59.108225    5250 logs.go:276] 0 containers: []
	W0910 11:15:59.108237    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:59.108297    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:59.119979    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:15:59.119997    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:15:59.120002    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:15:59.133708    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:15:59.133721    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:15:59.151501    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:15:59.151515    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:15:59.168670    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:15:59.168681    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:15:59.183216    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:15:59.183227    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:15:59.195066    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:15:59.195081    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:15:59.213969    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:15:59.213980    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:15:59.232525    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:59.232538    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:59.256496    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:15:59.256508    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:59.268492    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:59.268504    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:59.304626    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:15:59.304638    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:15:59.318175    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:15:59.318190    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:15:59.336027    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:15:59.336043    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:15:59.351457    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:59.351468    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:59.389582    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:59.389595    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:16:01.895226    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:02.479966    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:02.480010    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:06.897537    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:06.898096    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:16:06.929049    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:16:06.929180    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:16:06.948261    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:16:06.948347    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:16:06.963405    5250 logs.go:276] 4 containers: [fe45ed23e090 de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:16:06.963487    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:16:06.974928    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:16:06.974997    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:16:06.986955    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:16:06.987025    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:16:06.997567    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:16:06.997639    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:16:07.009180    5250 logs.go:276] 0 containers: []
	W0910 11:16:07.009191    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:16:07.009249    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:16:07.019900    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:16:07.019920    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:16:07.019925    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:16:07.031383    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:16:07.031394    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:16:07.046964    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:16:07.046974    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:16:07.058924    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:16:07.058935    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:16:07.063597    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:16:07.063606    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:16:07.077228    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:16:07.077237    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:16:07.088966    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:16:07.088976    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:16:07.100822    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:16:07.100832    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:16:07.120871    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:16:07.120882    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:16:07.155505    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:16:07.155517    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:16:07.170352    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:16:07.170362    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:16:07.184325    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:16:07.184336    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:16:07.210296    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:16:07.210307    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:16:07.480214    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:07.480243    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:07.250507    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:16:07.250522    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:16:07.262968    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:16:07.262979    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:16:09.775225    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:12.480447    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:12.480468    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:14.776244    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:14.776492    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:16:14.800484    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:16:14.800603    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:16:14.817266    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:16:14.817346    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:16:14.829923    5250 logs.go:276] 4 containers: [fe45ed23e090 de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:16:14.829997    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:16:14.844725    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:16:14.844816    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:16:14.855751    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:16:14.855838    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:16:14.867347    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:16:14.867417    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:16:14.877858    5250 logs.go:276] 0 containers: []
	W0910 11:16:14.877873    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:16:14.877934    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:16:14.888703    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:16:14.888721    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:16:14.888727    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:16:14.893439    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:16:14.893446    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:16:14.905836    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:16:14.905851    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:16:14.921045    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:16:14.921058    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:16:14.934396    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:16:14.934409    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:16:14.957906    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:16:14.957919    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:16:14.995881    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:16:14.995891    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:16:15.007870    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:16:15.007880    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:16:15.029703    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:16:15.029714    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:16:15.041571    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:16:15.041582    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:16:15.053570    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:16:15.053585    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:16:15.089251    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:16:15.089262    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:16:15.104569    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:16:15.104581    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:16:15.123952    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:16:15.123967    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:16:15.135703    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:16:15.135713    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:16:17.480604    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:17.480644    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:17.649467    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:22.481355    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:22.481381    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:22.651612    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:22.651753    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:16:22.663978    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:16:22.664048    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:16:22.674862    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:16:22.674940    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:16:22.685327    5250 logs.go:276] 4 containers: [fe45ed23e090 de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:16:22.685394    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:16:22.695707    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:16:22.695778    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:16:22.705515    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:16:22.705587    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:16:22.717035    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:16:22.717099    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:16:22.727801    5250 logs.go:276] 0 containers: []
	W0910 11:16:22.727813    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:16:22.727871    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:16:22.737895    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:16:22.737916    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:16:22.737922    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:16:22.752291    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:16:22.752302    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:16:22.767207    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:16:22.767218    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:16:22.779518    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:16:22.779532    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:16:22.790889    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:16:22.790900    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:16:22.795346    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:16:22.795355    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:16:22.833499    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:16:22.833514    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:16:22.845130    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:16:22.845141    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:16:22.856184    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:16:22.856197    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:16:22.872829    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:16:22.872841    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:16:22.897066    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:16:22.897077    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:16:22.908879    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:16:22.908892    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:16:22.947881    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:16:22.947898    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:16:22.962252    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:16:22.962265    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:16:22.977322    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:16:22.977337    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:16:25.495795    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:27.481848    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:27.481872    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0910 11:16:27.847599    5456 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0910 11:16:27.851001    5456 out.go:177] * Enabled addons: storage-provisioner
	I0910 11:16:27.858791    5456 addons.go:510] duration metric: took 30.476781625s for enable addons: enabled=[storage-provisioner]
	I0910 11:16:30.497247    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:30.497340    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:16:30.508792    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:16:30.508873    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:16:30.520209    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:16:30.520277    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:16:30.530375    5250 logs.go:276] 4 containers: [fe45ed23e090 de0d9e14794e 7e18ed854af8 7fb3f2c0be6a]
	I0910 11:16:30.530439    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:16:30.542990    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:16:30.543067    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:16:30.556839    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:16:30.556907    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:16:30.567349    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:16:30.567418    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:16:30.578286    5250 logs.go:276] 0 containers: []
	W0910 11:16:30.578298    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:16:30.578361    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:16:30.595627    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:16:30.595648    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:16:30.595654    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:16:30.607600    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:16:30.607612    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:16:30.620156    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:16:30.620169    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:16:30.632001    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:16:30.632010    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:16:30.636321    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:16:30.636330    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:16:30.651770    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:16:30.651783    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:16:30.663502    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:16:30.663514    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:16:30.681217    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:16:30.681228    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:16:30.703102    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:16:30.703111    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:16:30.714936    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:16:30.714949    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:16:30.754120    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:16:30.754128    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:16:30.789172    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:16:30.789186    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:16:30.800932    5250 logs.go:123] Gathering logs for coredns [7fb3f2c0be6a] ...
	I0910 11:16:30.800945    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb3f2c0be6a"
	I0910 11:16:30.812814    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:16:30.812824    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:16:30.831209    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:16:30.831223    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:16:32.482496    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:32.482533    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:33.357230    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:37.483564    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:37.483595    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:38.359458    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:38.359860    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:16:38.400041    5250 logs.go:276] 1 containers: [6c17780cae1a]
	I0910 11:16:38.400189    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:16:38.428078    5250 logs.go:276] 1 containers: [11d13bdd6ad1]
	I0910 11:16:38.428157    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:16:38.442010    5250 logs.go:276] 4 containers: [82e428ee9c3d fe45ed23e090 de0d9e14794e 7e18ed854af8]
	I0910 11:16:38.442090    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:16:38.453645    5250 logs.go:276] 1 containers: [4bbd3f9aef85]
	I0910 11:16:38.453716    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:16:38.464549    5250 logs.go:276] 1 containers: [4c4d5f351726]
	I0910 11:16:38.464621    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:16:38.475376    5250 logs.go:276] 1 containers: [13e49144c84c]
	I0910 11:16:38.475450    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:16:38.488263    5250 logs.go:276] 0 containers: []
	W0910 11:16:38.488275    5250 logs.go:278] No container was found matching "kindnet"
	I0910 11:16:38.488336    5250 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:16:38.500988    5250 logs.go:276] 1 containers: [2d6749f329f9]
	I0910 11:16:38.501005    5250 logs.go:123] Gathering logs for coredns [7e18ed854af8] ...
	I0910 11:16:38.501010    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e18ed854af8"
	I0910 11:16:38.512498    5250 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:16:38.512507    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:16:38.553182    5250 logs.go:123] Gathering logs for kube-apiserver [6c17780cae1a] ...
	I0910 11:16:38.553194    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c17780cae1a"
	I0910 11:16:38.567445    5250 logs.go:123] Gathering logs for etcd [11d13bdd6ad1] ...
	I0910 11:16:38.567458    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d13bdd6ad1"
	I0910 11:16:38.581585    5250 logs.go:123] Gathering logs for coredns [fe45ed23e090] ...
	I0910 11:16:38.581597    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe45ed23e090"
	I0910 11:16:38.593595    5250 logs.go:123] Gathering logs for kube-proxy [4c4d5f351726] ...
	I0910 11:16:38.593605    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c4d5f351726"
	I0910 11:16:38.606163    5250 logs.go:123] Gathering logs for kube-controller-manager [13e49144c84c] ...
	I0910 11:16:38.606174    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13e49144c84c"
	I0910 11:16:38.630771    5250 logs.go:123] Gathering logs for Docker ...
	I0910 11:16:38.630780    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:16:38.654784    5250 logs.go:123] Gathering logs for container status ...
	I0910 11:16:38.654792    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:16:38.666857    5250 logs.go:123] Gathering logs for coredns [82e428ee9c3d] ...
	I0910 11:16:38.666867    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82e428ee9c3d"
	I0910 11:16:38.679461    5250 logs.go:123] Gathering logs for storage-provisioner [2d6749f329f9] ...
	I0910 11:16:38.679476    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6749f329f9"
	I0910 11:16:38.691737    5250 logs.go:123] Gathering logs for kubelet ...
	I0910 11:16:38.691751    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:16:38.730513    5250 logs.go:123] Gathering logs for dmesg ...
	I0910 11:16:38.730521    5250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:16:38.734995    5250 logs.go:123] Gathering logs for coredns [de0d9e14794e] ...
	I0910 11:16:38.735001    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d9e14794e"
	I0910 11:16:38.749030    5250 logs.go:123] Gathering logs for kube-scheduler [4bbd3f9aef85] ...
	I0910 11:16:38.749040    5250 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbd3f9aef85"
	I0910 11:16:41.265998    5250 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:46.268133    5250 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:46.272031    5250 out.go:201] 
	W0910 11:16:46.276617    5250 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0910 11:16:46.276633    5250 out.go:270] * 
	W0910 11:16:46.277605    5250 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:16:46.292727    5250 out.go:201] 
	I0910 11:16:42.484903    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:42.484959    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:47.486486    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:47.486575    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:52.488584    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:52.488649    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:57.490905    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:57.491010    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:16:57.505323    5456 logs.go:276] 1 containers: [73521dd0cfb7]
	I0910 11:16:57.505402    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:16:57.517907    5456 logs.go:276] 1 containers: [8dfcec5f4da8]
	I0910 11:16:57.517985    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:16:57.532215    5456 logs.go:276] 2 containers: [1052fb80b9f5 846243f826bf]
	I0910 11:16:57.532284    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:16:57.543427    5456 logs.go:276] 1 containers: [6b215cc88f63]
	I0910 11:16:57.543501    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:16:57.555232    5456 logs.go:276] 1 containers: [e146a047350b]
	I0910 11:16:57.555314    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:16:57.568551    5456 logs.go:276] 1 containers: [c8a0b8755183]
	I0910 11:16:57.568623    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:16:57.578812    5456 logs.go:276] 0 containers: []
	W0910 11:16:57.578826    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:16:57.578889    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:16:57.589604    5456 logs.go:276] 1 containers: [56245aee5584]
	I0910 11:16:57.589619    5456 logs.go:123] Gathering logs for kube-scheduler [6b215cc88f63] ...
	I0910 11:16:57.589625    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b215cc88f63"
	I0910 11:16:57.605214    5456 logs.go:123] Gathering logs for kube-proxy [e146a047350b] ...
	I0910 11:16:57.605226    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e146a047350b"
	I0910 11:16:57.621250    5456 logs.go:123] Gathering logs for storage-provisioner [56245aee5584] ...
	I0910 11:16:57.621263    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56245aee5584"
	I0910 11:16:57.633109    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:16:57.633122    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:16:57.656730    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:16:57.656739    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:16:57.668966    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:16:57.668978    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:16:57.707405    5456 logs.go:123] Gathering logs for coredns [1052fb80b9f5] ...
	I0910 11:16:57.707417    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1052fb80b9f5"
	I0910 11:16:57.719102    5456 logs.go:123] Gathering logs for kube-apiserver [73521dd0cfb7] ...
	I0910 11:16:57.719113    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73521dd0cfb7"
	I0910 11:16:57.733868    5456 logs.go:123] Gathering logs for etcd [8dfcec5f4da8] ...
	I0910 11:16:57.733883    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfcec5f4da8"
	I0910 11:16:57.748916    5456 logs.go:123] Gathering logs for coredns [846243f826bf] ...
	I0910 11:16:57.748928    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846243f826bf"
	I0910 11:16:57.761653    5456 logs.go:123] Gathering logs for kube-controller-manager [c8a0b8755183] ...
	I0910 11:16:57.761664    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a0b8755183"
	I0910 11:16:57.779500    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:16:57.779510    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:16:57.814259    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:16:57.814269    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:17:00.320606    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
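
Note on the failure recorded above: the interleaved entries from processes 5250 and 5456 show minikube probing https://10.0.2.15:8443/healthz (api_server.go:253), giving up on each attempt after roughly five seconds (api_server.go:269), and falling back to re-gathering component logs until, for process 5250, the 6m0s node deadline expires with GUEST_START (process 5456 is still polling when the log dump below begins). The following is a minimal Go sketch of such a poll loop, under the assumptions that the endpoint serves a self-signed certificate and answers the literal body "ok" when healthy; waitForHealthz is an illustrative name, not minikube's actual implementation.

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 "ok" or ctx expires.
func waitForHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gaps between checks above
		Transport: &http.Transport{
			// assumption: the test apiserver uses a self-signed certificate
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver healthz never reported healthy: %w", ctx.Err())
		case <-time.After(5 * time.Second):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForHealthz(ctx, "https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("X Exiting due to GUEST_START:", err)
	}
}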
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-09-10 18:07:29 UTC, ends at Tue 2024-09-10 18:17:02 UTC. --
	Sep 10 18:16:46 running-upgrade-978000 cri-dockerd[3088]: time="2024-09-10T18:16:46Z" level=error msg="ContainerStats resp: {0x40008c2940 linux}"
	Sep 10 18:16:46 running-upgrade-978000 dockerd[3245]: time="2024-09-10T18:16:46.756093691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 18:16:46 running-upgrade-978000 dockerd[3245]: time="2024-09-10T18:16:46.756122607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 18:16:46 running-upgrade-978000 dockerd[3245]: time="2024-09-10T18:16:46.756193646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 18:16:46 running-upgrade-978000 dockerd[3245]: time="2024-09-10T18:16:46.756320974Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/563a7ece61839a62b7e9c4f402bba3f2780db2993c95f51fef89ac0cf6fae13c pid=18916 runtime=io.containerd.runc.v2
	Sep 10 18:16:47 running-upgrade-978000 cri-dockerd[3088]: time="2024-09-10T18:16:47Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 10 18:16:47 running-upgrade-978000 cri-dockerd[3088]: time="2024-09-10T18:16:47Z" level=error msg="ContainerStats resp: {0x400072f700 linux}"
	Sep 10 18:16:48 running-upgrade-978000 cri-dockerd[3088]: time="2024-09-10T18:16:48Z" level=error msg="ContainerStats resp: {0x400074ae40 linux}"
	Sep 10 18:16:48 running-upgrade-978000 cri-dockerd[3088]: time="2024-09-10T18:16:48Z" level=error msg="ContainerStats resp: {0x400007e600 linux}"
	Sep 10 18:16:48 running-upgrade-978000 cri-dockerd[3088]: time="2024-09-10T18:16:48Z" level=error msg="ContainerStats resp: {0x400074b380 linux}"
	Sep 10 18:16:48 running-upgrade-978000 cri-dockerd[3088]: time="2024-09-10T18:16:48Z" level=error msg="ContainerStats resp: {0x4000ab0300 linux}"
	Sep 10 18:16:48 running-upgrade-978000 cri-dockerd[3088]: time="2024-09-10T18:16:48Z" level=error msg="ContainerStats resp: {0x4000ab0680 linux}"
	Sep 10 18:16:48 running-upgrade-978000 cri-dockerd[3088]: time="2024-09-10T18:16:48Z" level=error msg="ContainerStats resp: {0x4000ab0e00 linux}"
	Sep 10 18:16:52 running-upgrade-978000 cri-dockerd[3088]: time="2024-09-10T18:16:52Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 10 18:16:57 running-upgrade-978000 cri-dockerd[3088]: time="2024-09-10T18:16:57Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 10 18:16:58 running-upgrade-978000 cri-dockerd[3088]: time="2024-09-10T18:16:58Z" level=error msg="ContainerStats resp: {0x400007ea40 linux}"
	Sep 10 18:16:58 running-upgrade-978000 cri-dockerd[3088]: time="2024-09-10T18:16:58Z" level=error msg="ContainerStats resp: {0x40008b46c0 linux}"
	Sep 10 18:16:59 running-upgrade-978000 cri-dockerd[3088]: time="2024-09-10T18:16:59Z" level=error msg="ContainerStats resp: {0x4000a068c0 linux}"
	Sep 10 18:17:00 running-upgrade-978000 cri-dockerd[3088]: time="2024-09-10T18:17:00Z" level=error msg="ContainerStats resp: {0x4000a077c0 linux}"
	Sep 10 18:17:00 running-upgrade-978000 cri-dockerd[3088]: time="2024-09-10T18:17:00Z" level=error msg="ContainerStats resp: {0x4000a07dc0 linux}"
	Sep 10 18:17:00 running-upgrade-978000 cri-dockerd[3088]: time="2024-09-10T18:17:00Z" level=error msg="ContainerStats resp: {0x40000b7c40 linux}"
	Sep 10 18:17:00 running-upgrade-978000 cri-dockerd[3088]: time="2024-09-10T18:17:00Z" level=error msg="ContainerStats resp: {0x40008b5b00 linux}"
	Sep 10 18:17:00 running-upgrade-978000 cri-dockerd[3088]: time="2024-09-10T18:17:00Z" level=error msg="ContainerStats resp: {0x4000ab06c0 linux}"
	Sep 10 18:17:00 running-upgrade-978000 cri-dockerd[3088]: time="2024-09-10T18:17:00Z" level=error msg="ContainerStats resp: {0x400074a8c0 linux}"
	Sep 10 18:17:00 running-upgrade-978000 cri-dockerd[3088]: time="2024-09-10T18:17:00Z" level=error msg="ContainerStats resp: {0x4000ab1080 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	563a7ece61839       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   17eaa18f8e4a0
	82e428ee9c3d2       edaa71f2aee88       25 seconds ago      Running             coredns                   2                   846b878bb2f59
	fe45ed23e0908       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   17eaa18f8e4a0
	de0d9e14794e6       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   846b878bb2f59
	4c4d5f351726c       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   adb12ce30910b
	2d6749f329f94       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   f93db89827987
	6c17780cae1a0       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   7e1bfaea668ba
	13e49144c84c6       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   64c0d28bbc236
	4bbd3f9aef85d       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   060be19afb66d
	11d13bdd6ad12       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   f1dc1442cea18
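
The table above is produced by the "container status" step (sudo crictl ps -a, falling back to sudo docker ps -a), while the per-component container IDs in the streaming log come from docker ps -a --filter=name=k8s_<component> --format={{.ID}} followed by docker logs --tail 400 <id>. Below is a small Go sketch of that resolve-then-tail pattern via os/exec; containerIDs and tailLogs are hypothetical helper names, not minikube's API, and the sketch assumes a local docker CLI rather than minikube's SSH runner.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of containers whose name matches k8s_<component>,
// the same filter seen in the logs.go:276 entries above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs returns the last 400 log lines of a container, mirroring
// "docker logs --tail 400 <id>". CombinedOutput is used because docker
// replays a container's stderr stream on stderr.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Printf("%s: %v\n", component, err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), component, ids)
		for _, id := range ids {
			logs, _ := tailLogs(id)
			fmt.Println(logs)
		}
	}
}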
	
	
	==> coredns [563a7ece6183] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 189916885707830414.6382611089176197009. HINFO: read udp 10.244.0.2:36341->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 189916885707830414.6382611089176197009. HINFO: read udp 10.244.0.2:59209->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 189916885707830414.6382611089176197009. HINFO: read udp 10.244.0.2:40983->10.0.2.3:53: i/o timeout
	
	
	==> coredns [82e428ee9c3d] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6093595108644079669.4255291639144441148. HINFO: read udp 10.244.0.3:46150->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6093595108644079669.4255291639144441148. HINFO: read udp 10.244.0.3:37623->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6093595108644079669.4255291639144441148. HINFO: read udp 10.244.0.3:51622->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6093595108644079669.4255291639144441148. HINFO: read udp 10.244.0.3:35507->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6093595108644079669.4255291639144441148. HINFO: read udp 10.244.0.3:49381->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6093595108644079669.4255291639144441148. HINFO: read udp 10.244.0.3:34118->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6093595108644079669.4255291639144441148. HINFO: read udp 10.244.0.3:60595->10.0.2.3:53: i/o timeout
	
	
	==> coredns [de0d9e14794e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2975020077388452237.7238685639958513273. HINFO: read udp 10.244.0.3:33234->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2975020077388452237.7238685639958513273. HINFO: read udp 10.244.0.3:35925->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2975020077388452237.7238685639958513273. HINFO: read udp 10.244.0.3:46188->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2975020077388452237.7238685639958513273. HINFO: read udp 10.244.0.3:56356->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2975020077388452237.7238685639958513273. HINFO: read udp 10.244.0.3:58172->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2975020077388452237.7238685639958513273. HINFO: read udp 10.244.0.3:49467->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2975020077388452237.7238685639958513273. HINFO: read udp 10.244.0.3:44516->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2975020077388452237.7238685639958513273. HINFO: read udp 10.244.0.3:52905->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2975020077388452237.7238685639958513273. HINFO: read udp 10.244.0.3:36391->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2975020077388452237.7238685639958513273. HINFO: read udp 10.244.0.3:46733->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fe45ed23e090] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6311060434089527316.332314439826948688. HINFO: read udp 10.244.0.2:46821->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6311060434089527316.332314439826948688. HINFO: read udp 10.244.0.2:52402->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6311060434089527316.332314439826948688. HINFO: read udp 10.244.0.2:42497->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6311060434089527316.332314439826948688. HINFO: read udp 10.244.0.2:39070->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6311060434089527316.332314439826948688. HINFO: read udp 10.244.0.2:60262->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6311060434089527316.332314439826948688. HINFO: read udp 10.244.0.2:36327->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6311060434089527316.332314439826948688. HINFO: read udp 10.244.0.2:45407->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6311060434089527316.332314439826948688. HINFO: read udp 10.244.0.2:51085->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6311060434089527316.332314439826948688. HINFO: read udp 10.244.0.2:55878->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6311060434089527316.332314439826948688. HINFO: read udp 10.244.0.2:47414->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
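
All four coredns instances above fail the same way: the HINFO probe that CoreDNS's loop detection emits never gets an answer from the upstream resolver at 10.0.2.3 (the DNS forwarder provided by QEMU's user-mode networking), so in-cluster DNS has no working upstream even though the pods themselves run. The following minimal Go probe reproduces the "read udp ...->10.0.2.3:53: i/o timeout" from inside the guest; the hand-built packet is a simplified root HINFO question rather than CoreDNS's randomized one, and the address is taken from the log.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("udp", "10.0.2.3:53", 2*time.Second)
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()
	conn.SetDeadline(time.Now().Add(2 * time.Second))

	// Minimal DNS query: 12-byte header plus one question for
	// ". IN HINFO" (QTYPE 13), loosely mirroring CoreDNS's probe.
	query := []byte{
		0x12, 0x34, // transaction ID
		0x01, 0x00, // flags: recursion desired
		0x00, 0x01, // QDCOUNT = 1
		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // AN/NS/AR counts = 0
		0x00,       // QNAME = root "."
		0x00, 0x0d, // QTYPE = HINFO
		0x00, 0x01, // QCLASS = IN
	}
	if _, err := conn.Write(query); err != nil {
		fmt.Println("write:", err)
		return
	}
	buf := make([]byte, 512)
	if _, err := conn.Read(buf); err != nil {
		fmt.Println("read udp:", err) // expect "i/o timeout", as in the logs
		return
	}
	fmt.Println("got a DNS response; upstream resolver is reachable")
}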
	
	
	==> describe nodes <==
	Name:               running-upgrade-978000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-978000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=running-upgrade-978000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T11_12_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 18:12:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-978000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 18:16:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 18:12:45 +0000   Tue, 10 Sep 2024 18:12:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 18:12:45 +0000   Tue, 10 Sep 2024 18:12:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 18:12:45 +0000   Tue, 10 Sep 2024 18:12:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 18:12:45 +0000   Tue, 10 Sep 2024 18:12:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-978000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 08cec965048240a49f99ba0c3b04cf6d
	  System UUID:                08cec965048240a49f99ba0c3b04cf6d
	  Boot ID:                    d283d3a8-d0eb-498a-b4ca-64f4444f6f6c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-7v2nb                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-qwzr2                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-978000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-978000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-978000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-vwl7k                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-978000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x5 over 4m23s)  kubelet          Node running-upgrade-978000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-978000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-978000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-978000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-978000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-978000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-978000 status is now: NodeReady
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-978000 event: Registered Node running-upgrade-978000 in Controller
	
	
	==> dmesg <==
	[  +1.541763] systemd-fstab-generator[876]: Ignoring "noauto" for root device
	[  +0.069943] systemd-fstab-generator[887]: Ignoring "noauto" for root device
	[  +0.083597] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +1.134620] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.079113] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
	[  +0.083720] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +1.970395] systemd-fstab-generator[1286]: Ignoring "noauto" for root device
	[Sep10 18:08] systemd-fstab-generator[1982]: Ignoring "noauto" for root device
	[  +2.742862] systemd-fstab-generator[2266]: Ignoring "noauto" for root device
	[  +0.150017] systemd-fstab-generator[2300]: Ignoring "noauto" for root device
	[  +0.091426] systemd-fstab-generator[2311]: Ignoring "noauto" for root device
	[  +0.090522] systemd-fstab-generator[2324]: Ignoring "noauto" for root device
	[ +12.421046] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.212350] systemd-fstab-generator[3043]: Ignoring "noauto" for root device
	[  +0.092223] systemd-fstab-generator[3056]: Ignoring "noauto" for root device
	[  +0.075572] systemd-fstab-generator[3067]: Ignoring "noauto" for root device
	[  +0.091417] systemd-fstab-generator[3081]: Ignoring "noauto" for root device
	[  +2.322655] systemd-fstab-generator[3232]: Ignoring "noauto" for root device
	[  +3.989538] systemd-fstab-generator[3602]: Ignoring "noauto" for root device
	[  +0.934302] systemd-fstab-generator[3731]: Ignoring "noauto" for root device
	[ +19.800950] kauditd_printk_skb: 68 callbacks suppressed
	[Sep10 18:12] kauditd_printk_skb: 21 callbacks suppressed
	[  +1.631997] systemd-fstab-generator[11896]: Ignoring "noauto" for root device
	[  +5.632739] systemd-fstab-generator[12498]: Ignoring "noauto" for root device
	[  +0.465611] systemd-fstab-generator[12630]: Ignoring "noauto" for root device
	
	
	==> etcd [11d13bdd6ad1] <==
	{"level":"info","ts":"2024-09-10T18:12:40.548Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-10T18:12:40.548Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-10T18:12:40.548Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-09-10T18:12:40.548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-10T18:12:40.548Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-10T18:12:40.548Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-10T18:12:40.548Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-10T18:12:40.747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-10T18:12:40.747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-10T18:12:40.747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-10T18:12:40.747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-10T18:12:40.747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-10T18:12:40.747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-10T18:12:40.747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-10T18:12:40.747Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-978000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-10T18:12:40.748Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T18:12:40.748Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-09-10T18:12:40.748Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:12:40.748Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T18:12:40.749Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-10T18:12:40.751Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-10T18:12:40.751Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-10T18:12:40.799Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:12:40.799Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:12:40.799Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 18:17:02 up 9 min,  0 users,  load average: 0.45, 0.34, 0.16
	Linux running-upgrade-978000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [6c17780cae1a] <==
	I0910 18:12:42.483334       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0910 18:12:42.483385       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0910 18:12:42.483775       1 cache.go:39] Caches are synced for autoregister controller
	I0910 18:12:42.484249       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0910 18:12:42.491430       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0910 18:12:42.493036       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0910 18:12:42.497979       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0910 18:12:43.214685       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0910 18:12:43.392888       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0910 18:12:43.398420       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0910 18:12:43.398499       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0910 18:12:43.562934       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0910 18:12:43.572755       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0910 18:12:43.648286       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0910 18:12:43.650394       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0910 18:12:43.650715       1 controller.go:611] quota admission added evaluator for: endpoints
	I0910 18:12:43.652114       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0910 18:12:44.518668       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0910 18:12:45.146033       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0910 18:12:45.151409       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0910 18:12:45.160994       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0910 18:12:45.208984       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0910 18:12:58.541322       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0910 18:12:58.643651       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0910 18:12:59.178998       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [13e49144c84c] <==
	I0910 18:12:57.988018       1 shared_informer.go:262] Caches are synced for TTL
	I0910 18:12:57.988962       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0910 18:12:57.990055       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0910 18:12:57.990086       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0910 18:12:57.990118       1 shared_informer.go:262] Caches are synced for persistent volume
	I0910 18:12:57.990099       1 shared_informer.go:262] Caches are synced for stateful set
	I0910 18:12:57.990073       1 shared_informer.go:262] Caches are synced for service account
	I0910 18:12:57.990077       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0910 18:12:57.990080       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0910 18:12:57.990060       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0910 18:12:57.995163       1 shared_informer.go:262] Caches are synced for attach detach
	I0910 18:12:58.040235       1 shared_informer.go:262] Caches are synced for job
	I0910 18:12:58.040236       1 shared_informer.go:262] Caches are synced for cronjob
	I0910 18:12:58.090429       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0910 18:12:58.143430       1 shared_informer.go:262] Caches are synced for resource quota
	I0910 18:12:58.201243       1 shared_informer.go:262] Caches are synced for resource quota
	I0910 18:12:58.233500       1 shared_informer.go:262] Caches are synced for disruption
	I0910 18:12:58.233602       1 disruption.go:371] Sending events to api server.
	I0910 18:12:58.543590       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0910 18:12:58.604165       1 shared_informer.go:262] Caches are synced for garbage collector
	I0910 18:12:58.639696       1 shared_informer.go:262] Caches are synced for garbage collector
	I0910 18:12:58.639766       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0910 18:12:58.646554       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vwl7k"
	I0910 18:12:58.992358       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-7v2nb"
	I0910 18:12:58.996481       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-qwzr2"
	
	
	==> kube-proxy [4c4d5f351726] <==
	I0910 18:12:59.168128       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0910 18:12:59.168151       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0910 18:12:59.168160       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0910 18:12:59.176855       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0910 18:12:59.176866       1 server_others.go:206] "Using iptables Proxier"
	I0910 18:12:59.176877       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0910 18:12:59.176980       1 server.go:661] "Version info" version="v1.24.1"
	I0910 18:12:59.176985       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:12:59.177548       1 config.go:317] "Starting service config controller"
	I0910 18:12:59.177584       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0910 18:12:59.177623       1 config.go:226] "Starting endpoint slice config controller"
	I0910 18:12:59.177641       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0910 18:12:59.177887       1 config.go:444] "Starting node config controller"
	I0910 18:12:59.177907       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0910 18:12:59.278337       1 shared_informer.go:262] Caches are synced for node config
	I0910 18:12:59.278370       1 shared_informer.go:262] Caches are synced for service config
	I0910 18:12:59.278394       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4bbd3f9aef85] <==
	W0910 18:12:42.444701       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0910 18:12:42.444734       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0910 18:12:42.444766       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 18:12:42.445709       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0910 18:12:42.445839       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0910 18:12:42.445899       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0910 18:12:42.445940       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0910 18:12:42.445988       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0910 18:12:42.446023       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0910 18:12:42.446043       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0910 18:12:42.446102       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0910 18:12:42.446122       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0910 18:12:42.446151       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0910 18:12:42.446191       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0910 18:12:42.446218       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0910 18:12:42.446227       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0910 18:12:42.446601       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0910 18:12:42.446610       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0910 18:12:43.259201       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0910 18:12:43.259294       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0910 18:12:43.286455       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0910 18:12:43.286566       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0910 18:12:43.444435       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0910 18:12:43.444533       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0910 18:12:43.841165       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-09-10 18:07:29 UTC, ends at Tue 2024-09-10 18:17:02 UTC. --
	Sep 10 18:12:46 running-upgrade-978000 kubelet[12504]: E0910 18:12:46.979315   12504 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-978000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-978000"
	Sep 10 18:12:47 running-upgrade-978000 kubelet[12504]: E0910 18:12:47.178043   12504 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-978000\" already exists" pod="kube-system/etcd-running-upgrade-978000"
	Sep 10 18:12:47 running-upgrade-978000 kubelet[12504]: I0910 18:12:47.378172   12504 request.go:601] Waited for 1.121716211s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Sep 10 18:12:47 running-upgrade-978000 kubelet[12504]: E0910 18:12:47.381049   12504 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-978000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-978000"
	Sep 10 18:12:57 running-upgrade-978000 kubelet[12504]: I0910 18:12:57.945918   12504 topology_manager.go:200] "Topology Admit Handler"
	Sep 10 18:12:58 running-upgrade-978000 kubelet[12504]: I0910 18:12:58.026185   12504 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 10 18:12:58 running-upgrade-978000 kubelet[12504]: I0910 18:12:58.026573   12504 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 10 18:12:58 running-upgrade-978000 kubelet[12504]: I0910 18:12:58.127127   12504 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5d768dbd-b48d-4dcd-b19d-25c4c5b8915c-tmp\") pod \"storage-provisioner\" (UID: \"5d768dbd-b48d-4dcd-b19d-25c4c5b8915c\") " pod="kube-system/storage-provisioner"
	Sep 10 18:12:58 running-upgrade-978000 kubelet[12504]: I0910 18:12:58.127150   12504 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvdn4\" (UniqueName: \"kubernetes.io/projected/5d768dbd-b48d-4dcd-b19d-25c4c5b8915c-kube-api-access-wvdn4\") pod \"storage-provisioner\" (UID: \"5d768dbd-b48d-4dcd-b19d-25c4c5b8915c\") " pod="kube-system/storage-provisioner"
	Sep 10 18:12:58 running-upgrade-978000 kubelet[12504]: E0910 18:12:58.230734   12504 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 10 18:12:58 running-upgrade-978000 kubelet[12504]: E0910 18:12:58.230751   12504 projected.go:192] Error preparing data for projected volume kube-api-access-wvdn4 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 10 18:12:58 running-upgrade-978000 kubelet[12504]: E0910 18:12:58.230787   12504 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/5d768dbd-b48d-4dcd-b19d-25c4c5b8915c-kube-api-access-wvdn4 podName:5d768dbd-b48d-4dcd-b19d-25c4c5b8915c nodeName:}" failed. No retries permitted until 2024-09-10 18:12:58.730772566 +0000 UTC m=+13.594393686 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wvdn4" (UniqueName: "kubernetes.io/projected/5d768dbd-b48d-4dcd-b19d-25c4c5b8915c-kube-api-access-wvdn4") pod "storage-provisioner" (UID: "5d768dbd-b48d-4dcd-b19d-25c4c5b8915c") : configmap "kube-root-ca.crt" not found
	Sep 10 18:12:58 running-upgrade-978000 kubelet[12504]: I0910 18:12:58.649716   12504 topology_manager.go:200] "Topology Admit Handler"
	Sep 10 18:12:58 running-upgrade-978000 kubelet[12504]: I0910 18:12:58.834813   12504 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c0680628-2e72-4417-b6dd-15d8f3fa1b73-kube-proxy\") pod \"kube-proxy-vwl7k\" (UID: \"c0680628-2e72-4417-b6dd-15d8f3fa1b73\") " pod="kube-system/kube-proxy-vwl7k"
	Sep 10 18:12:58 running-upgrade-978000 kubelet[12504]: I0910 18:12:58.834849   12504 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm88p\" (UniqueName: \"kubernetes.io/projected/c0680628-2e72-4417-b6dd-15d8f3fa1b73-kube-api-access-fm88p\") pod \"kube-proxy-vwl7k\" (UID: \"c0680628-2e72-4417-b6dd-15d8f3fa1b73\") " pod="kube-system/kube-proxy-vwl7k"
	Sep 10 18:12:58 running-upgrade-978000 kubelet[12504]: I0910 18:12:58.834862   12504 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0680628-2e72-4417-b6dd-15d8f3fa1b73-lib-modules\") pod \"kube-proxy-vwl7k\" (UID: \"c0680628-2e72-4417-b6dd-15d8f3fa1b73\") " pod="kube-system/kube-proxy-vwl7k"
	Sep 10 18:12:58 running-upgrade-978000 kubelet[12504]: I0910 18:12:58.834872   12504 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0680628-2e72-4417-b6dd-15d8f3fa1b73-xtables-lock\") pod \"kube-proxy-vwl7k\" (UID: \"c0680628-2e72-4417-b6dd-15d8f3fa1b73\") " pod="kube-system/kube-proxy-vwl7k"
	Sep 10 18:12:59 running-upgrade-978000 kubelet[12504]: I0910 18:12:58.996847   12504 topology_manager.go:200] "Topology Admit Handler"
	Sep 10 18:12:59 running-upgrade-978000 kubelet[12504]: I0910 18:12:59.006288   12504 topology_manager.go:200] "Topology Admit Handler"
	Sep 10 18:12:59 running-upgrade-978000 kubelet[12504]: I0910 18:12:59.136456   12504 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnnwk\" (UniqueName: \"kubernetes.io/projected/b0881c23-9008-466d-933e-72b384d4bb3c-kube-api-access-vnnwk\") pod \"coredns-6d4b75cb6d-7v2nb\" (UID: \"b0881c23-9008-466d-933e-72b384d4bb3c\") " pod="kube-system/coredns-6d4b75cb6d-7v2nb"
	Sep 10 18:12:59 running-upgrade-978000 kubelet[12504]: I0910 18:12:59.136486   12504 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0881c23-9008-466d-933e-72b384d4bb3c-config-volume\") pod \"coredns-6d4b75cb6d-7v2nb\" (UID: \"b0881c23-9008-466d-933e-72b384d4bb3c\") " pod="kube-system/coredns-6d4b75cb6d-7v2nb"
	Sep 10 18:12:59 running-upgrade-978000 kubelet[12504]: I0910 18:12:59.136499   12504 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a441ed2-3492-44f5-ade0-dd84241a3963-config-volume\") pod \"coredns-6d4b75cb6d-qwzr2\" (UID: \"3a441ed2-3492-44f5-ade0-dd84241a3963\") " pod="kube-system/coredns-6d4b75cb6d-qwzr2"
	Sep 10 18:12:59 running-upgrade-978000 kubelet[12504]: I0910 18:12:59.136510   12504 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnsbs\" (UniqueName: \"kubernetes.io/projected/3a441ed2-3492-44f5-ade0-dd84241a3963-kube-api-access-dnsbs\") pod \"coredns-6d4b75cb6d-qwzr2\" (UID: \"3a441ed2-3492-44f5-ade0-dd84241a3963\") " pod="kube-system/coredns-6d4b75cb6d-qwzr2"
	Sep 10 18:16:37 running-upgrade-978000 kubelet[12504]: I0910 18:16:37.610901   12504 scope.go:110] "RemoveContainer" containerID="7fb3f2c0be6a5be5e8efb53fe591630ee11266b76c47dd46339025a55cfa3aeb"
	Sep 10 18:16:47 running-upgrade-978000 kubelet[12504]: I0910 18:16:47.659088   12504 scope.go:110] "RemoveContainer" containerID="7e18ed854af85c31ea066472930e3ef7f604de52e364838427b9d8e3b0d3ab3f"
	
	
	==> storage-provisioner [2d6749f329f9] <==
	I0910 18:12:59.107304       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0910 18:12:59.115848       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0910 18:12:59.115898       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0910 18:12:59.121088       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0910 18:12:59.121715       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-978000_db85fafd-bbe0-42e0-8786-f5107bb06c32!
	I0910 18:12:59.122699       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9faba996-2b6e-4ddf-8471-d506f0fdfe2b", APIVersion:"v1", ResourceVersion:"367", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-978000_db85fafd-bbe0-42e0-8786-f5107bb06c32 became leader
	I0910 18:12:59.221888       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-978000_db85fafd-bbe0-42e0-8786-f5107bb06c32!
	

-- /stdout --
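Reading the capture above: etcd's single member f074a195de705325 pre-votes, votes for itself, and becomes leader at term 2 within the same millisecond, the expected bootstrap for a one-node cluster with no peers to wait for; likewise, the kube-scheduler's burst of "forbidden" list/watch errors is the usual startup race that clears once the apiserver finishes installing the default RBAC bindings (its informer caches sync at 18:12:43). A minimal external cross-check of both endpoints could look like the Go sketch below. It is illustrative only: it assumes the VM address 10.0.2.15 from the log is reachable from where it runs, and that anonymous access to /readyz (the kube-apiserver default via system:public-info-viewer) has not been disabled.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net"
		"net/http"
		"time"
	)

	func main() {
		// Plain TCP connect to etcd's advertised client URL: enough to tell
		// "listener up" from "connection refused" without TLS client certs.
		if conn, err := net.DialTimeout("tcp", "10.0.2.15:2379", 2*time.Second); err != nil {
			fmt.Println("etcd 2379:", err)
		} else {
			conn.Close()
			fmt.Println("etcd 2379: accepting connections")
		}

		// Anonymous poll of the apiserver's aggregate readiness endpoint.
		// TLS verification is skipped for diagnosis only (self-signed CA).
		client := &http.Client{
			Timeout:   3 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		if resp, err := client.Get("https://10.0.2.15:8443/readyz"); err != nil {
			fmt.Println("apiserver /readyz:", err)
		} else {
			fmt.Println("apiserver /readyz:", resp.Status)
			resp.Body.Close()
		}
	}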
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-978000 -n running-upgrade-978000
E0910 11:17:07.483615    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-978000 -n running-upgrade-978000: exit status 2 (15.678271542s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-978000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-978000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-978000
--- FAIL: TestRunningBinaryUpgrade (615.09s)
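Note the failure mode here: the harness tolerates the non-zero status probe ("may be ok" above) and merely skips the kubectl follow-ups; the test fails because the upgraded cluster's apiserver stays Stopped. The probe itself is easy to reproduce by hand. The Go sketch below shells out exactly as the harness does; the binary path and profile name are copied from the log, and since the profile is deleted during cleanup the sketch is illustrative rather than replayable.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation the harness used; the Go template {{.APIServer}}
		// prints only the apiserver state, e.g. "Running" or "Stopped".
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.APIServer}}", "-p", "running-upgrade-978000")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			// A non-Running component makes minikube exit non-zero
			// (exit status 2 in the run above).
			fmt.Println("status:", err)
		}
	}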

TestKubernetesUpgrade (18.59s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-590000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-590000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.84302375s)

-- stdout --
	* [kubernetes-upgrade-590000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-590000" primary control-plane node in "kubernetes-upgrade-590000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-590000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 11:10:06.381511    5351 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:10:06.381642    5351 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:10:06.381646    5351 out.go:358] Setting ErrFile to fd 2...
	I0910 11:10:06.381648    5351 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:10:06.381774    5351 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:10:06.382849    5351 out.go:352] Setting JSON to false
	I0910 11:10:06.399627    5351 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4170,"bootTime":1725987636,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:10:06.399699    5351 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:10:06.406709    5351 out.go:177] * [kubernetes-upgrade-590000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:10:06.414745    5351 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:10:06.414794    5351 notify.go:220] Checking for updates...
	I0910 11:10:06.420720    5351 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:10:06.423681    5351 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:10:06.426632    5351 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:10:06.429706    5351 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:10:06.432709    5351 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:10:06.436106    5351 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:10:06.436176    5351 config.go:182] Loaded profile config "running-upgrade-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0910 11:10:06.436224    5351 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:10:06.440647    5351 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 11:10:06.447636    5351 start.go:297] selected driver: qemu2
	I0910 11:10:06.447643    5351 start.go:901] validating driver "qemu2" against <nil>
	I0910 11:10:06.447651    5351 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:10:06.450047    5351 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 11:10:06.452672    5351 out.go:177] * Automatically selected the socket_vmnet network
	I0910 11:10:06.455754    5351 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0910 11:10:06.455785    5351 cni.go:84] Creating CNI manager for ""
	I0910 11:10:06.455792    5351 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0910 11:10:06.455815    5351 start.go:340] cluster config:
	{Name:kubernetes-upgrade-590000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-590000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:10:06.459413    5351 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:10:06.466683    5351 out.go:177] * Starting "kubernetes-upgrade-590000" primary control-plane node in "kubernetes-upgrade-590000" cluster
	I0910 11:10:06.470693    5351 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0910 11:10:06.470706    5351 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0910 11:10:06.470713    5351 cache.go:56] Caching tarball of preloaded images
	I0910 11:10:06.470763    5351 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:10:06.470768    5351 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0910 11:10:06.470817    5351 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/kubernetes-upgrade-590000/config.json ...
	I0910 11:10:06.470828    5351 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/kubernetes-upgrade-590000/config.json: {Name:mke19ea98e307f274409128bc91225d030bb4072 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:10:06.471180    5351 start.go:360] acquireMachinesLock for kubernetes-upgrade-590000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:10:06.471222    5351 start.go:364] duration metric: took 27.916µs to acquireMachinesLock for "kubernetes-upgrade-590000"
	I0910 11:10:06.471233    5351 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-590000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-590000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:10:06.471266    5351 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:10:06.479667    5351 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 11:10:06.496167    5351 start.go:159] libmachine.API.Create for "kubernetes-upgrade-590000" (driver="qemu2")
	I0910 11:10:06.496194    5351 client.go:168] LocalClient.Create starting
	I0910 11:10:06.496255    5351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:10:06.496287    5351 main.go:141] libmachine: Decoding PEM data...
	I0910 11:10:06.496300    5351 main.go:141] libmachine: Parsing certificate...
	I0910 11:10:06.496342    5351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:10:06.496366    5351 main.go:141] libmachine: Decoding PEM data...
	I0910 11:10:06.496373    5351 main.go:141] libmachine: Parsing certificate...
	I0910 11:10:06.496833    5351 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:10:06.657856    5351 main.go:141] libmachine: Creating SSH key...
	I0910 11:10:06.748330    5351 main.go:141] libmachine: Creating Disk image...
	I0910 11:10:06.748336    5351 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:10:06.748561    5351 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/disk.qcow2
	I0910 11:10:06.757819    5351 main.go:141] libmachine: STDOUT: 
	I0910 11:10:06.757834    5351 main.go:141] libmachine: STDERR: 
	I0910 11:10:06.757891    5351 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/disk.qcow2 +20000M
	I0910 11:10:06.766019    5351 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:10:06.766035    5351 main.go:141] libmachine: STDERR: 
	I0910 11:10:06.766049    5351 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/disk.qcow2
	I0910 11:10:06.766055    5351 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:10:06.766076    5351 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:10:06.766102    5351 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:da:c7:17:e2:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/disk.qcow2
	I0910 11:10:06.767730    5351 main.go:141] libmachine: STDOUT: 
	I0910 11:10:06.767748    5351 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:10:06.767768    5351 client.go:171] duration metric: took 271.576167ms to LocalClient.Create
	I0910 11:10:08.769811    5351 start.go:128] duration metric: took 2.298593333s to createHost
	I0910 11:10:08.769842    5351 start.go:83] releasing machines lock for "kubernetes-upgrade-590000", held for 2.298676583s
	W0910 11:10:08.769894    5351 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:10:08.779504    5351 out.go:177] * Deleting "kubernetes-upgrade-590000" in qemu2 ...
	W0910 11:10:08.801823    5351 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:10:08.801830    5351 start.go:729] Will try again in 5 seconds ...
	I0910 11:10:13.803885    5351 start.go:360] acquireMachinesLock for kubernetes-upgrade-590000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:10:13.804417    5351 start.go:364] duration metric: took 444.667µs to acquireMachinesLock for "kubernetes-upgrade-590000"
	I0910 11:10:13.804589    5351 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-590000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-590000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:10:13.804829    5351 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:10:13.810604    5351 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 11:10:13.860490    5351 start.go:159] libmachine.API.Create for "kubernetes-upgrade-590000" (driver="qemu2")
	I0910 11:10:13.860541    5351 client.go:168] LocalClient.Create starting
	I0910 11:10:13.860657    5351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:10:13.860730    5351 main.go:141] libmachine: Decoding PEM data...
	I0910 11:10:13.860744    5351 main.go:141] libmachine: Parsing certificate...
	I0910 11:10:13.860809    5351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:10:13.860856    5351 main.go:141] libmachine: Decoding PEM data...
	I0910 11:10:13.860866    5351 main.go:141] libmachine: Parsing certificate...
	I0910 11:10:13.861440    5351 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:10:14.029713    5351 main.go:141] libmachine: Creating SSH key...
	I0910 11:10:14.129895    5351 main.go:141] libmachine: Creating Disk image...
	I0910 11:10:14.129904    5351 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:10:14.130142    5351 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/disk.qcow2
	I0910 11:10:14.139413    5351 main.go:141] libmachine: STDOUT: 
	I0910 11:10:14.139439    5351 main.go:141] libmachine: STDERR: 
	I0910 11:10:14.139482    5351 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/disk.qcow2 +20000M
	I0910 11:10:14.147496    5351 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:10:14.147510    5351 main.go:141] libmachine: STDERR: 
	I0910 11:10:14.147521    5351 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/disk.qcow2
	I0910 11:10:14.147536    5351 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:10:14.147549    5351 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:10:14.147578    5351 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:96:79:a2:1b:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/disk.qcow2
	I0910 11:10:14.149235    5351 main.go:141] libmachine: STDOUT: 
	I0910 11:10:14.149258    5351 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:10:14.149269    5351 client.go:171] duration metric: took 288.727917ms to LocalClient.Create
	I0910 11:10:16.151347    5351 start.go:128] duration metric: took 2.34655625s to createHost
	I0910 11:10:16.151396    5351 start.go:83] releasing machines lock for "kubernetes-upgrade-590000", held for 2.347018166s
	W0910 11:10:16.151646    5351 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-590000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-590000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:10:16.160240    5351 out.go:201] 
	W0910 11:10:16.170156    5351 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:10:16.170187    5351 out.go:270] * 
	* 
	W0910 11:10:16.171639    5351 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:10:16.182115    5351 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-590000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
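Both VM creation attempts die at the same step (the restart attempts further down fail identically): socket_vmnet_client cannot reach /var/run/socket_vmnet, so QEMU never receives its network file descriptor and minikube aborts with GUEST_PROVISION. A host-side probe, sketched below in Go with the socket path taken from the log, separates the two usual causes: the socket file was never created (the daemon is not running) versus a stale socket file with no listener behind it.

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet"
		// Missing file: socket_vmnet was never started on this host.
		if _, err := os.Stat(sock); err != nil {
			fmt.Fprintf(os.Stderr, "stat: %v (daemon likely never started)\n", err)
			os.Exit(1)
		}
		// File exists but nothing accepts: stale socket left by a dead daemon,
		// which matches the "Connection refused" in the log above.
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "dial: %v (stale socket, no listener)\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}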
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-590000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-590000: (3.302898125s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-590000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-590000 status --format={{.Host}}: exit status 7 (63.89325ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-590000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-590000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.189897s)

-- stdout --
	* [kubernetes-upgrade-590000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-590000" primary control-plane node in "kubernetes-upgrade-590000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-590000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-590000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 11:10:19.595954    5389 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:10:19.596095    5389 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:10:19.596100    5389 out.go:358] Setting ErrFile to fd 2...
	I0910 11:10:19.596102    5389 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:10:19.596271    5389 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:10:19.597481    5389 out.go:352] Setting JSON to false
	I0910 11:10:19.615827    5389 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4183,"bootTime":1725987636,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:10:19.615907    5389 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:10:19.620995    5389 out.go:177] * [kubernetes-upgrade-590000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:10:19.627965    5389 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:10:19.627996    5389 notify.go:220] Checking for updates...
	I0910 11:10:19.633925    5389 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:10:19.636964    5389 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:10:19.639958    5389 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:10:19.642961    5389 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:10:19.645962    5389 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:10:19.649212    5389 config.go:182] Loaded profile config "kubernetes-upgrade-590000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0910 11:10:19.649470    5389 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:10:19.653949    5389 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 11:10:19.660926    5389 start.go:297] selected driver: qemu2
	I0910 11:10:19.660932    5389 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-590000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-590000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:10:19.660987    5389 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:10:19.663286    5389 cni.go:84] Creating CNI manager for ""
	I0910 11:10:19.663305    5389 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:10:19.663329    5389 start.go:340] cluster config:
	{Name:kubernetes-upgrade-590000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-590000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:10:19.666717    5389 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:10:19.673790    5389 out.go:177] * Starting "kubernetes-upgrade-590000" primary control-plane node in "kubernetes-upgrade-590000" cluster
	I0910 11:10:19.677924    5389 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:10:19.677941    5389 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:10:19.677949    5389 cache.go:56] Caching tarball of preloaded images
	I0910 11:10:19.678016    5389 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:10:19.678022    5389 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:10:19.678091    5389 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/kubernetes-upgrade-590000/config.json ...
	I0910 11:10:19.678670    5389 start.go:360] acquireMachinesLock for kubernetes-upgrade-590000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:10:19.678702    5389 start.go:364] duration metric: took 27µs to acquireMachinesLock for "kubernetes-upgrade-590000"
	I0910 11:10:19.678711    5389 start.go:96] Skipping create...Using existing machine configuration
	I0910 11:10:19.678717    5389 fix.go:54] fixHost starting: 
	I0910 11:10:19.678832    5389 fix.go:112] recreateIfNeeded on kubernetes-upgrade-590000: state=Stopped err=<nil>
	W0910 11:10:19.678840    5389 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 11:10:19.686903    5389 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-590000" ...
	I0910 11:10:19.690990    5389 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:10:19.691023    5389 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:96:79:a2:1b:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/disk.qcow2
	I0910 11:10:19.693038    5389 main.go:141] libmachine: STDOUT: 
	I0910 11:10:19.693059    5389 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:10:19.693088    5389 fix.go:56] duration metric: took 14.372959ms for fixHost
	I0910 11:10:19.693100    5389 start.go:83] releasing machines lock for "kubernetes-upgrade-590000", held for 14.385167ms
	W0910 11:10:19.693109    5389 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:10:19.693140    5389 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:10:19.693144    5389 start.go:729] Will try again in 5 seconds ...
	I0910 11:10:24.694001    5389 start.go:360] acquireMachinesLock for kubernetes-upgrade-590000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:10:24.694514    5389 start.go:364] duration metric: took 396.833µs to acquireMachinesLock for "kubernetes-upgrade-590000"
	I0910 11:10:24.694661    5389 start.go:96] Skipping create...Using existing machine configuration
	I0910 11:10:24.694683    5389 fix.go:54] fixHost starting: 
	I0910 11:10:24.695464    5389 fix.go:112] recreateIfNeeded on kubernetes-upgrade-590000: state=Stopped err=<nil>
	W0910 11:10:24.695493    5389 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 11:10:24.701069    5389 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-590000" ...
	I0910 11:10:24.709971    5389 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:10:24.710332    5389 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:96:79:a2:1b:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubernetes-upgrade-590000/disk.qcow2
	I0910 11:10:24.719978    5389 main.go:141] libmachine: STDOUT: 
	I0910 11:10:24.720035    5389 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:10:24.720104    5389 fix.go:56] duration metric: took 25.424916ms for fixHost
	I0910 11:10:24.720120    5389 start.go:83] releasing machines lock for "kubernetes-upgrade-590000", held for 25.580167ms
	W0910 11:10:24.720346    5389 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-590000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:10:24.727943    5389 out.go:201] 
	W0910 11:10:24.731086    5389 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:10:24.731129    5389 out.go:270] * 
	W0910 11:10:24.733723    5389 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:10:24.741038    5389 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-590000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-590000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-590000 version --output=json: exit status 1 (67.531834ms)

** stderr ** 
	error: context "kubernetes-upgrade-590000" does not exist

** /stderr **
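The kubectl failure above is a direct consequence of the failed start: the VM never came up, so no "kubernetes-upgrade-590000" entry was ever written to the kubeconfig. A quick way to confirm which contexts actually exist (generic kubectl usage, not part of the test harness):

	kubectl config get-contexts       # list all contexts in the active kubeconfig
	kubectl config current-context    # print the selected context; errors if none is set
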
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-09-10 11:10:24.824383 -0700 PDT m=+2524.292768126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-590000 -n kubernetes-upgrade-590000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-590000 -n kubernetes-upgrade-590000: exit status 7 (33.830792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-590000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-590000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-590000
--- FAIL: TestKubernetesUpgrade (18.59s)

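Every qemu2 start in this block dies on the same host-side condition: nothing is serving /var/run/socket_vmnet, so socket_vmnet_client cannot hand the VM a network file descriptor and QEMU never launches. A minimal host-side check, assuming socket_vmnet was installed through Homebrew (the service name and remedy here are illustrative, not taken from this log):

	ls -l /var/run/socket_vmnet                    # does the UNIX socket exist?
	sudo launchctl list | grep -i socket_vmnet     # is a daemon registered for it?
	sudo brew services restart socket_vmnet        # restart the service if Homebrew manages it
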
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.26s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19598
- KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1942937717/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.26s)

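This failure is expected on Apple Silicon: hyperkit is an Intel-only hypervisor, so exit status 56 (DRV_UNSUPPORTED_OS) is minikube correctly refusing the driver rather than a regression. A guard of the kind a wrapper could apply before invoking this test, sketched in shell (the real check lives in Go in driver_install_or_update_test.go; this is illustrative only):

	# hyperkit only runs on x86_64 Macs; skip the upgrade check elsewhere
	if [ "$(uname -m)" = "arm64" ]; then
	    echo "SKIP: hyperkit is unsupported on darwin/arm64"
	    exit 0
	fi
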
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.14s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19598
- KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3183665794/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.14s)

TestStoppedBinaryUpgrade/Upgrade (572.72s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.227466271 start -p stopped-upgrade-163000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.227466271 start -p stopped-upgrade-163000 --memory=2200 --vm-driver=qemu2 : (38.620776167s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.227466271 -p stopped-upgrade-163000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.227466271 -p stopped-upgrade-163000 stop: (12.121348083s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-163000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0910 11:12:07.491347    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
E0910 11:12:20.641584    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/functional-475000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-163000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.893305167s)

-- stdout --
	* [stopped-upgrade-163000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-163000" primary control-plane node in "stopped-upgrade-163000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-163000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner

-- /stdout --
** stderr ** 
	I0910 11:11:16.627101    5456 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:11:16.627274    5456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:11:16.627278    5456 out.go:358] Setting ErrFile to fd 2...
	I0910 11:11:16.627281    5456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:11:16.627446    5456 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:11:16.628827    5456 out.go:352] Setting JSON to false
	I0910 11:11:16.648337    5456 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4240,"bootTime":1725987636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:11:16.648404    5456 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:11:16.652863    5456 out.go:177] * [stopped-upgrade-163000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:11:16.660905    5456 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:11:16.660949    5456 notify.go:220] Checking for updates...
	I0910 11:11:16.667827    5456 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:11:16.669284    5456 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:11:16.672775    5456 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:11:16.675851    5456 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:11:16.678832    5456 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:11:16.682069    5456 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0910 11:11:16.685804    5456 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0910 11:11:16.688982    5456 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:11:16.693762    5456 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 11:11:16.700845    5456 start.go:297] selected driver: qemu2
	I0910 11:11:16.700851    5456 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-163000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50528 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0910 11:11:16.700901    5456 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:11:16.703754    5456 cni.go:84] Creating CNI manager for ""
	I0910 11:11:16.703780    5456 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:11:16.703810    5456 start.go:340] cluster config:
	{Name:stopped-upgrade-163000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50528 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0910 11:11:16.703858    5456 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:11:16.710854    5456 out.go:177] * Starting "stopped-upgrade-163000" primary control-plane node in "stopped-upgrade-163000" cluster
	I0910 11:11:16.713670    5456 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0910 11:11:16.713687    5456 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0910 11:11:16.713695    5456 cache.go:56] Caching tarball of preloaded images
	I0910 11:11:16.713755    5456 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:11:16.713761    5456 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0910 11:11:16.713815    5456 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/config.json ...
	I0910 11:11:16.714325    5456 start.go:360] acquireMachinesLock for stopped-upgrade-163000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:11:16.714363    5456 start.go:364] duration metric: took 31.084µs to acquireMachinesLock for "stopped-upgrade-163000"
	I0910 11:11:16.714375    5456 start.go:96] Skipping create...Using existing machine configuration
	I0910 11:11:16.714382    5456 fix.go:54] fixHost starting: 
	I0910 11:11:16.714500    5456 fix.go:112] recreateIfNeeded on stopped-upgrade-163000: state=Stopped err=<nil>
	W0910 11:11:16.714509    5456 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 11:11:16.722696    5456 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-163000" ...
	I0910 11:11:16.726812    5456 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:11:16.726904    5456 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/stopped-upgrade-163000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/stopped-upgrade-163000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/stopped-upgrade-163000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50494-:22,hostfwd=tcp::50495-:2376,hostname=stopped-upgrade-163000 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/stopped-upgrade-163000/disk.qcow2
	I0910 11:11:16.774396    5456 main.go:141] libmachine: STDOUT: 
	I0910 11:11:16.774424    5456 main.go:141] libmachine: STDERR: 
	I0910 11:11:16.774430    5456 main.go:141] libmachine: Waiting for VM to start (ssh -p 50494 docker@127.0.0.1)...
	I0910 11:11:36.854398    5456 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/config.json ...
	I0910 11:11:36.855189    5456 machine.go:93] provisionDockerMachine start ...
	I0910 11:11:36.855355    5456 main.go:141] libmachine: Using SSH client type: native
	I0910 11:11:36.855878    5456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105367ba0] 0x10536a400 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0910 11:11:36.855902    5456 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 11:11:36.944734    5456 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 11:11:36.944769    5456 buildroot.go:166] provisioning hostname "stopped-upgrade-163000"
	I0910 11:11:36.944907    5456 main.go:141] libmachine: Using SSH client type: native
	I0910 11:11:36.945175    5456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105367ba0] 0x10536a400 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0910 11:11:36.945186    5456 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-163000 && echo "stopped-upgrade-163000" | sudo tee /etc/hostname
	I0910 11:11:37.025422    5456 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-163000
	
	I0910 11:11:37.025502    5456 main.go:141] libmachine: Using SSH client type: native
	I0910 11:11:37.025664    5456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105367ba0] 0x10536a400 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0910 11:11:37.025675    5456 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-163000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-163000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-163000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 11:11:37.096436    5456 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 11:11:37.096448    5456 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19598-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19598-1276/.minikube}
	I0910 11:11:37.096461    5456 buildroot.go:174] setting up certificates
	I0910 11:11:37.096466    5456 provision.go:84] configureAuth start
	I0910 11:11:37.096470    5456 provision.go:143] copyHostCerts
	I0910 11:11:37.096552    5456 exec_runner.go:144] found /Users/jenkins/minikube-integration/19598-1276/.minikube/cert.pem, removing ...
	I0910 11:11:37.096560    5456 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19598-1276/.minikube/cert.pem
	I0910 11:11:37.096671    5456 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19598-1276/.minikube/cert.pem (1123 bytes)
	I0910 11:11:37.096853    5456 exec_runner.go:144] found /Users/jenkins/minikube-integration/19598-1276/.minikube/key.pem, removing ...
	I0910 11:11:37.096858    5456 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19598-1276/.minikube/key.pem
	I0910 11:11:37.096909    5456 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19598-1276/.minikube/key.pem (1675 bytes)
	I0910 11:11:37.097016    5456 exec_runner.go:144] found /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.pem, removing ...
	I0910 11:11:37.097022    5456 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.pem
	I0910 11:11:37.097073    5456 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.pem (1078 bytes)
	I0910 11:11:37.097159    5456 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-163000 san=[127.0.0.1 localhost minikube stopped-upgrade-163000]
	I0910 11:11:37.168755    5456 provision.go:177] copyRemoteCerts
	I0910 11:11:37.168796    5456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 11:11:37.168803    5456 sshutil.go:53] new ssh client: &{IP:localhost Port:50494 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/stopped-upgrade-163000/id_rsa Username:docker}
	I0910 11:11:37.204235    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0910 11:11:37.210841    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 11:11:37.217496    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 11:11:37.224716    5456 provision.go:87] duration metric: took 128.248916ms to configureAuth
	I0910 11:11:37.224725    5456 buildroot.go:189] setting minikube options for container-runtime
	I0910 11:11:37.224829    5456 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0910 11:11:37.224867    5456 main.go:141] libmachine: Using SSH client type: native
	I0910 11:11:37.224953    5456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105367ba0] 0x10536a400 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0910 11:11:37.224963    5456 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 11:11:37.291752    5456 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 11:11:37.291762    5456 buildroot.go:70] root file system type: tmpfs
	I0910 11:11:37.291817    5456 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 11:11:37.291867    5456 main.go:141] libmachine: Using SSH client type: native
	I0910 11:11:37.291997    5456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105367ba0] 0x10536a400 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0910 11:11:37.292031    5456 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 11:11:37.362722    5456 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 11:11:37.362782    5456 main.go:141] libmachine: Using SSH client type: native
	I0910 11:11:37.362904    5456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105367ba0] 0x10536a400 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0910 11:11:37.362914    5456 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 11:11:37.704376    5456 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0910 11:11:37.704393    5456 machine.go:96] duration metric: took 849.217208ms to provisionDockerMachine
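
The diff-or-move one-liner above only swaps in docker.service.new when it differs from the installed unit, then reloads, enables, and restarts Docker. To see what systemd actually resolved after such a replacement, standard systemctl introspection works inside the guest (generic usage, not commands taken from this log):

	systemctl cat docker.service                   # show the unit file(s) systemd loaded
	systemctl show docker --property=ExecStart     # confirm the empty ExecStart= reset took effect
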
	I0910 11:11:37.704400    5456 start.go:293] postStartSetup for "stopped-upgrade-163000" (driver="qemu2")
	I0910 11:11:37.704407    5456 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 11:11:37.704486    5456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 11:11:37.704498    5456 sshutil.go:53] new ssh client: &{IP:localhost Port:50494 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/stopped-upgrade-163000/id_rsa Username:docker}
	I0910 11:11:37.740433    5456 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 11:11:37.741792    5456 info.go:137] Remote host: Buildroot 2021.02.12
	I0910 11:11:37.741799    5456 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19598-1276/.minikube/addons for local assets ...
	I0910 11:11:37.741885    5456 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19598-1276/.minikube/files for local assets ...
	I0910 11:11:37.742009    5456 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19598-1276/.minikube/files/etc/ssl/certs/17952.pem -> 17952.pem in /etc/ssl/certs
	I0910 11:11:37.742137    5456 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 11:11:37.745240    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/files/etc/ssl/certs/17952.pem --> /etc/ssl/certs/17952.pem (1708 bytes)
	I0910 11:11:37.752673    5456 start.go:296] duration metric: took 48.2695ms for postStartSetup
	I0910 11:11:37.752686    5456 fix.go:56] duration metric: took 21.0388655s for fixHost
	I0910 11:11:37.752719    5456 main.go:141] libmachine: Using SSH client type: native
	I0910 11:11:37.752817    5456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105367ba0] 0x10536a400 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0910 11:11:37.752827    5456 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 11:11:37.818698    5456 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725991898.281001712
	
	I0910 11:11:37.818705    5456 fix.go:216] guest clock: 1725991898.281001712
	I0910 11:11:37.818709    5456 fix.go:229] Guest: 2024-09-10 11:11:38.281001712 -0700 PDT Remote: 2024-09-10 11:11:37.752688 -0700 PDT m=+21.157374376 (delta=528.313712ms)
	I0910 11:11:37.818720    5456 fix.go:200] guest clock delta is within tolerance: 528.313712ms
	I0910 11:11:37.818723    5456 start.go:83] releasing machines lock for "stopped-upgrade-163000", held for 21.104912084s
	I0910 11:11:37.818786    5456 ssh_runner.go:195] Run: cat /version.json
	I0910 11:11:37.818800    5456 sshutil.go:53] new ssh client: &{IP:localhost Port:50494 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/stopped-upgrade-163000/id_rsa Username:docker}
	I0910 11:11:37.818787    5456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 11:11:37.818857    5456 sshutil.go:53] new ssh client: &{IP:localhost Port:50494 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/stopped-upgrade-163000/id_rsa Username:docker}
	W0910 11:11:37.819408    5456 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50494: connect: connection refused
	I0910 11:11:37.819433    5456 retry.go:31] will retry after 222.187113ms: dial tcp [::1]:50494: connect: connection refused
	W0910 11:11:37.851656    5456 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0910 11:11:37.851712    5456 ssh_runner.go:195] Run: systemctl --version
	I0910 11:11:37.853579    5456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 11:11:37.855332    5456 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 11:11:37.855360    5456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0910 11:11:37.858243    5456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0910 11:11:37.863050    5456 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 11:11:37.863059    5456 start.go:495] detecting cgroup driver to use...
	I0910 11:11:37.863130    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 11:11:37.869477    5456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0910 11:11:37.872729    5456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 11:11:37.876116    5456 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 11:11:37.876142    5456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 11:11:37.879139    5456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 11:11:37.882202    5456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 11:11:37.885361    5456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 11:11:37.888759    5456 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 11:11:37.891932    5456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 11:11:37.894632    5456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 11:11:37.897716    5456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0910 11:11:37.901103    5456 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 11:11:37.903909    5456 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 11:11:37.906409    5456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 11:11:37.968197    5456 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0910 11:11:37.973819    5456 start.go:495] detecting cgroup driver to use...
	I0910 11:11:37.973871    5456 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 11:11:37.980775    5456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 11:11:37.986024    5456 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 11:11:37.992729    5456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 11:11:37.997500    5456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 11:11:38.002356    5456 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0910 11:11:38.033204    5456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 11:11:38.037985    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 11:11:38.043331    5456 ssh_runner.go:195] Run: which cri-dockerd
	I0910 11:11:38.044567    5456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 11:11:38.047437    5456 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0910 11:11:38.054273    5456 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 11:11:38.121029    5456 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 11:11:38.338760    5456 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 11:11:38.338862    5456 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0910 11:11:38.350097    5456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 11:11:38.427403    5456 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 11:11:39.535328    5456 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.107937875s)
	I0910 11:11:39.535385    5456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0910 11:11:39.539887    5456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 11:11:39.544184    5456 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 11:11:39.608181    5456 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 11:11:39.667055    5456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 11:11:39.736586    5456 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 11:11:39.742119    5456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 11:11:39.747019    5456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 11:11:39.807101    5456 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0910 11:11:39.843789    5456 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 11:11:39.843865    5456 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 11:11:39.846101    5456 start.go:563] Will wait 60s for crictl version
	I0910 11:11:39.846158    5456 ssh_runner.go:195] Run: which crictl
	I0910 11:11:39.847700    5456 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 11:11:39.862093    5456 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0910 11:11:39.862178    5456 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 11:11:39.882341    5456 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 11:11:39.901413    5456 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0910 11:11:39.901484    5456 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0910 11:11:39.902734    5456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 11:11:39.906093    5456 kubeadm.go:883] updating cluster {Name:stopped-upgrade-163000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50528 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0910 11:11:39.906141    5456 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0910 11:11:39.906179    5456 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 11:11:39.916950    5456 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0910 11:11:39.916959    5456 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0910 11:11:39.917013    5456 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0910 11:11:39.920661    5456 ssh_runner.go:195] Run: which lz4
	I0910 11:11:39.921978    5456 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 11:11:39.923331    5456 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 11:11:39.923340    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0910 11:11:40.805952    5456 docker.go:649] duration metric: took 884.028041ms to copy over tarball
	I0910 11:11:40.806011    5456 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 11:11:41.965545    5456 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.159550792s)
	I0910 11:11:41.965559    5456 ssh_runner.go:146] rm: /preloaded.tar.lz4
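
For context on the preload step above: the runner copies the cached lz4 tarball to the guest, unpacks it into /var with xattrs preserved, and then deletes it. Below is a minimal Go sketch of the guest-side commands; plain os/exec stands in for minikube's SSH runner, which is an assumption made for brevity.

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Same flags as the log: keep security.capability xattrs and
        // decompress through lz4 while extracting into /var.
        extract := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := extract.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
        // The runner removes the tarball once it has been unpacked.
        if err := exec.Command("sudo", "rm", "-f", "/preloaded.tar.lz4").Run(); err != nil {
            log.Fatal(err)
        }
    }
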
	I0910 11:11:41.981558    5456 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0910 11:11:41.984883    5456 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0910 11:11:41.989972    5456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 11:11:42.056204    5456 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 11:11:43.603339    5456 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.547159292s)
	I0910 11:11:43.603444    5456 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 11:11:43.614093    5456 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0910 11:11:43.614103    5456 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0910 11:11:43.614108    5456 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0910 11:11:43.618418    5456 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 11:11:43.619874    5456 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0910 11:11:43.622378    5456 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0910 11:11:43.622423    5456 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 11:11:43.624016    5456 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0910 11:11:43.624287    5456 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0910 11:11:43.625228    5456 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0910 11:11:43.625495    5456 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0910 11:11:43.626361    5456 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0910 11:11:43.627279    5456 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0910 11:11:43.628166    5456 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0910 11:11:43.628560    5456 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0910 11:11:43.629678    5456 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0910 11:11:43.629685    5456 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0910 11:11:43.630846    5456 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0910 11:11:43.631925    5456 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0910 11:11:44.524394    5456 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0910 11:11:44.548278    5456 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0910 11:11:44.548322    5456 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0910 11:11:44.548404    5456 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0910 11:11:44.563184    5456 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0910 11:11:44.565086    5456 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0910 11:11:44.565205    5456 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0910 11:11:44.576315    5456 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0910 11:11:44.576321    5456 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0910 11:11:44.576340    5456 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0910 11:11:44.576358    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0910 11:11:44.576389    5456 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0910 11:11:44.592424    5456 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0910 11:11:44.593149    5456 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0910 11:11:44.593166    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0910 11:11:44.593589    5456 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0910 11:11:44.622615    5456 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0910 11:11:44.622624    5456 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0910 11:11:44.622637    5456 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0910 11:11:44.622688    5456 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	W0910 11:11:44.631604    5456 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0910 11:11:44.631735    5456 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0910 11:11:44.632320    5456 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0910 11:11:44.641793    5456 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0910 11:11:44.641814    5456 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0910 11:11:44.641868    5456 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0910 11:11:44.651647    5456 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0910 11:11:44.651766    5456 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0910 11:11:44.653200    5456 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0910 11:11:44.653215    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0910 11:11:44.695055    5456 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0910 11:11:44.695069    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0910 11:11:44.730974    5456 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0910 11:11:44.748662    5456 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0910 11:11:44.758442    5456 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0910 11:11:44.758461    5456 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0910 11:11:44.758518    5456 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0910 11:11:44.761309    5456 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0910 11:11:44.770277    5456 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0910 11:11:44.773634    5456 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0910 11:11:44.773664    5456 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0910 11:11:44.773678    5456 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0910 11:11:44.773710    5456 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0910 11:11:44.786866    5456 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0910 11:11:44.786885    5456 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0910 11:11:44.786915    5456 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0910 11:11:44.786942    5456 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0910 11:11:44.797328    5456 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W0910 11:11:44.836489    5456 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0910 11:11:44.836574    5456 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 11:11:44.848123    5456 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0910 11:11:44.848143    5456 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 11:11:44.848198    5456 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 11:11:44.862632    5456 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0910 11:11:44.862757    5456 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0910 11:11:44.864148    5456 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0910 11:11:44.864165    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0910 11:11:44.893641    5456 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0910 11:11:44.893653    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0910 11:11:45.135399    5456 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0910 11:11:45.135434    5456 cache_images.go:92] duration metric: took 1.521359625s to LoadCachedImages
	W0910 11:11:45.135473    5456 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
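
Each image transfer above follows the same three-step shape: stat the target under /var/lib/minikube/images, scp it from the host cache when the stat fails, then pipe it into docker load. A sketch of the final step only, again assuming os/exec in place of the SSH runner:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Equivalent of the logged: /bin/bash -c "sudo cat <image> | docker load"
        cmd := exec.Command("/bin/bash", "-c",
            "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("docker load: %v\n%s", err, out)
        }
    }
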
	I0910 11:11:45.135479    5456 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0910 11:11:45.135536    5456 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-163000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 11:11:45.135593    5456 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0910 11:11:45.155477    5456 cni.go:84] Creating CNI manager for ""
	I0910 11:11:45.155488    5456 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:11:45.155492    5456 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 11:11:45.155500    5456 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-163000 NodeName:stopped-upgrade-163000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 11:11:45.155584    5456 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-163000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 11:11:45.155637    5456 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0910 11:11:45.158415    5456 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 11:11:45.158447    5456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 11:11:45.161482    5456 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0910 11:11:45.166719    5456 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 11:11:45.172071    5456 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0910 11:11:45.177711    5456 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0910 11:11:45.179049    5456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 11:11:45.182495    5456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 11:11:45.242826    5456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 11:11:45.252808    5456 certs.go:68] Setting up /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000 for IP: 10.0.2.15
	I0910 11:11:45.252820    5456 certs.go:194] generating shared ca certs ...
	I0910 11:11:45.252829    5456 certs.go:226] acquiring lock for ca certs: {Name:mk5b237e8da18ff05d2622f0be5a14dbe0d9b4f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:11:45.253001    5456 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.key
	I0910 11:11:45.253051    5456 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/proxy-client-ca.key
	I0910 11:11:45.253057    5456 certs.go:256] generating profile certs ...
	I0910 11:11:45.253131    5456 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/client.key
	I0910 11:11:45.253151    5456 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/apiserver.key.17ddb0fc
	I0910 11:11:45.253162    5456 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/apiserver.crt.17ddb0fc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0910 11:11:45.296715    5456 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/apiserver.crt.17ddb0fc ...
	I0910 11:11:45.296726    5456 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/apiserver.crt.17ddb0fc: {Name:mk2707e74b1ac3f5acd434d600070bb62d00ad14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:11:45.297033    5456 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/apiserver.key.17ddb0fc ...
	I0910 11:11:45.297038    5456 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/apiserver.key.17ddb0fc: {Name:mk1364e97b609150ccb4151ef7919a71c67a2736 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:11:45.297164    5456 certs.go:381] copying /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/apiserver.crt.17ddb0fc -> /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/apiserver.crt
	I0910 11:11:45.297298    5456 certs.go:385] copying /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/apiserver.key.17ddb0fc -> /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/apiserver.key
	I0910 11:11:45.297455    5456 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/proxy-client.key
	I0910 11:11:45.297589    5456 certs.go:484] found cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/1795.pem (1338 bytes)
	W0910 11:11:45.297622    5456 certs.go:480] ignoring /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/1795_empty.pem, impossibly tiny 0 bytes
	I0910 11:11:45.297627    5456 certs.go:484] found cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca-key.pem (1675 bytes)
	I0910 11:11:45.297650    5456 certs.go:484] found cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem (1078 bytes)
	I0910 11:11:45.297672    5456 certs.go:484] found cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem (1123 bytes)
	I0910 11:11:45.297695    5456 certs.go:484] found cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/key.pem (1675 bytes)
	I0910 11:11:45.297734    5456 certs.go:484] found cert: /Users/jenkins/minikube-integration/19598-1276/.minikube/files/etc/ssl/certs/17952.pem (1708 bytes)
	I0910 11:11:45.298062    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 11:11:45.305107    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 11:11:45.312227    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 11:11:45.319359    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 11:11:45.326205    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0910 11:11:45.333296    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 11:11:45.340588    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 11:11:45.347501    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 11:11:45.354259    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 11:11:45.361185    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/1795.pem --> /usr/share/ca-certificates/1795.pem (1338 bytes)
	I0910 11:11:45.368331    5456 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19598-1276/.minikube/files/etc/ssl/certs/17952.pem --> /usr/share/ca-certificates/17952.pem (1708 bytes)
	I0910 11:11:45.374893    5456 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 11:11:45.379553    5456 ssh_runner.go:195] Run: openssl version
	I0910 11:11:45.381373    5456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17952.pem && ln -fs /usr/share/ca-certificates/17952.pem /etc/ssl/certs/17952.pem"
	I0910 11:11:45.384785    5456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17952.pem
	I0910 11:11:45.386239    5456 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:44 /usr/share/ca-certificates/17952.pem
	I0910 11:11:45.386258    5456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17952.pem
	I0910 11:11:45.388001    5456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17952.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 11:11:45.390705    5456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 11:11:45.393612    5456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 11:11:45.395105    5456 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 11:11:45.395129    5456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 11:11:45.396784    5456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 11:11:45.399753    5456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1795.pem && ln -fs /usr/share/ca-certificates/1795.pem /etc/ssl/certs/1795.pem"
	I0910 11:11:45.402549    5456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1795.pem
	I0910 11:11:45.403847    5456 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:44 /usr/share/ca-certificates/1795.pem
	I0910 11:11:45.403863    5456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1795.pem
	I0910 11:11:45.405389    5456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1795.pem /etc/ssl/certs/51391683.0"
	I0910 11:11:45.408598    5456 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 11:11:45.410077    5456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 11:11:45.412151    5456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 11:11:45.413979    5456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 11:11:45.415900    5456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 11:11:45.417663    5456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 11:11:45.419413    5456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
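
The openssl calls above verify that each control-plane cert stays valid for at least 86400 seconds (24 hours): -checkend exits non-zero when the cert would expire inside that window, which is what prompts regeneration. A sketch of one such check, with os/exec assumed in place of the SSH runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // openssl x509 -checkend 86400 exits 0 only if the cert
        // remains valid for at least the next 86400 seconds.
        err := exec.Command("openssl", "x509", "-noout",
            "-in", "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "-checkend", "86400").Run()
        if err != nil {
            fmt.Println("cert expires within 24h; would be regenerated")
        } else {
            fmt.Println("cert valid for at least 24h")
        }
    }
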
	I0910 11:11:45.421231    5456 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-163000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50528 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0910 11:11:45.421298    5456 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 11:11:45.431780    5456 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 11:11:45.434875    5456 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 11:11:45.434881    5456 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 11:11:45.434903    5456 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 11:11:45.438477    5456 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 11:11:45.438783    5456 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-163000" does not appear in /Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:11:45.438876    5456 kubeconfig.go:62] /Users/jenkins/minikube-integration/19598-1276/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-163000" cluster setting kubeconfig missing "stopped-upgrade-163000" context setting]
	I0910 11:11:45.439077    5456 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/kubeconfig: {Name:mk1f6cc8b92900503b90f69186dd5a0cadd3a95f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:11:45.439555    5456 kapi.go:59] client config for stopped-upgrade-163000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/client.key", CAFile:"/Users/jenkins/minikube-integration/19598-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10692e200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 11:11:45.439906    5456 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 11:11:45.442681    5456 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-163000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
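
The drift check above reduces to running diff -u against the freshly rendered config: diff exits 1 when the files differ, and that non-zero status is what routes minikube into the reconfigure path. A compact sketch using the paths from the log, with os/exec standing in for the SSH runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
        if err != nil { // exit status 1 means the rendered config changed
            fmt.Printf("kubeadm config drift detected, will reconfigure:\n%s", out)
        }
    }
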
	I0910 11:11:45.442686    5456 kubeadm.go:1160] stopping kube-system containers ...
	I0910 11:11:45.442727    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 11:11:45.453186    5456 docker.go:483] Stopping containers: [29be8057a1dd 469710f91457 0871f0cf5a37 8d2c0af3a670 8db99da6a98d 4fd21312b6dc 6555df8fa22d 938546a9d4bc]
	I0910 11:11:45.453250    5456 ssh_runner.go:195] Run: docker stop 29be8057a1dd 469710f91457 0871f0cf5a37 8d2c0af3a670 8db99da6a98d 4fd21312b6dc 6555df8fa22d 938546a9d4bc
	I0910 11:11:45.468465    5456 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 11:11:45.474160    5456 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 11:11:45.477246    5456 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 11:11:45.477252    5456 kubeadm.go:157] found existing configuration files:
	
	I0910 11:11:45.477278    5456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/admin.conf
	I0910 11:11:45.480414    5456 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 11:11:45.480437    5456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 11:11:45.482985    5456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/kubelet.conf
	I0910 11:11:45.485441    5456 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 11:11:45.485467    5456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 11:11:45.488592    5456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/controller-manager.conf
	I0910 11:11:45.491414    5456 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 11:11:45.491435    5456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 11:11:45.494114    5456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/scheduler.conf
	I0910 11:11:45.497149    5456 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 11:11:45.497173    5456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 11:11:45.500065    5456 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 11:11:45.502666    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 11:11:45.524129    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 11:11:46.237683    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 11:11:46.351537    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 11:11:46.381026    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
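
The five commands above replay individual kubeadm init phases (certs, kubeconfigs, kubelet bootstrap, control-plane manifests, local etcd) against the same rendered config, with the versioned binaries prepended to PATH. A sketch of that loop follows, again with os/exec assumed in place of the SSH hop:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Phase names taken directly from the five logged invocations.
        phases := []string{"certs all", "kubeconfig all", "kubelet-start",
            "control-plane all", "etcd local"}
        for _, p := range phases {
            cmd := exec.Command("/bin/bash", "-c",
                `sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase `+
                    p+" --config /var/tmp/minikube/kubeadm.yaml")
            if out, err := cmd.CombinedOutput(); err != nil {
                log.Fatalf("phase %q: %v\n%s", p, err, out)
            }
        }
    }
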
	I0910 11:11:46.403256    5456 api_server.go:52] waiting for apiserver process to appear ...
	I0910 11:11:46.403335    5456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 11:11:46.905441    5456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 11:11:47.405364    5456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 11:11:47.409313    5456 api_server.go:72] duration metric: took 1.006086959s to wait for apiserver process to appear ...
	I0910 11:11:47.409322    5456 api_server.go:88] waiting for apiserver healthz status ...
	I0910 11:11:47.409335    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:11:52.411433    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:11:52.411536    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:11:57.412238    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:11:57.412319    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:02.413214    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:02.413318    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:07.414176    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:07.414206    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:12.415197    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:12.415278    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:17.416734    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:17.416777    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:22.418892    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:22.418915    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:27.420969    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:27.421046    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:32.423477    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:32.423502    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:37.424829    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:37.424851    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:42.426919    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:42.426963    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:47.429092    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
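
The repeating "Checking apiserver healthz ... stopped" pairs above are a probe loop: an HTTPS GET against /healthz with a roughly 5-second client timeout, retried until the endpoint answers. A minimal sketch; InsecureSkipVerify is an assumption here for brevity, since the real client trusts minikube's generated CA rather than skipping verification:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                // Mirrors the "context deadline exceeded" lines in the log.
                fmt.Println("stopped:", err)
                time.Sleep(500 * time.Millisecond)
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver healthy")
                return
            }
        }
    }
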
	I0910 11:12:47.429256    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:12:47.443860    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:12:47.443927    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:12:47.455656    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:12:47.455732    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:12:47.466342    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:12:47.466417    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:12:47.476707    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:12:47.476781    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:12:47.487078    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:12:47.487151    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:12:47.499286    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:12:47.499362    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:12:47.509912    5456 logs.go:276] 0 containers: []
	W0910 11:12:47.509929    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:12:47.509992    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:12:47.520483    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:12:47.520501    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:12:47.520507    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:12:47.531847    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:12:47.531859    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:12:47.570276    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:12:47.570287    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:12:47.610246    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:12:47.610257    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:12:47.622311    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:12:47.622323    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:12:47.633738    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:12:47.633749    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:12:47.652180    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:12:47.652193    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:12:47.663556    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:12:47.663567    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:12:47.667874    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:12:47.667881    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:12:47.742827    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:12:47.742842    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:12:47.759440    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:12:47.759451    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:12:47.773008    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:12:47.773019    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:12:47.791726    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:12:47.791743    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:12:47.804965    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:12:47.804976    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:12:47.830162    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:12:47.830175    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:12:47.842373    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:12:47.842387    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:12:47.857805    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:12:47.857816    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
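
Because healthz never comes up, the runner falls back to collecting diagnostics: it lists container IDs per component via docker ps -a name filters, then tails the last 400 lines of each. A sketch of the tail step; the two IDs are copied from the apiserver listing above purely for illustration:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        for _, id := range []string{"5adcd52e4474", "0871f0cf5a37"} {
            out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            if err != nil {
                fmt.Printf("logs %s: %v\n", id, err)
                continue
            }
            fmt.Printf("--- %s ---\n%s", id, out)
        }
    }
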
	I0910 11:12:50.370958    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:12:55.371481    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:12:55.371589    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:12:55.382543    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:12:55.382625    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:12:55.392838    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:12:55.392902    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:12:55.403304    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:12:55.403373    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:12:55.413938    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:12:55.414027    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:12:55.425670    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:12:55.425740    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:12:55.436384    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:12:55.436458    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:12:55.446677    5456 logs.go:276] 0 containers: []
	W0910 11:12:55.446689    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:12:55.446747    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:12:55.457558    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:12:55.457575    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:12:55.457582    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:12:55.496158    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:12:55.496172    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:12:55.508689    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:12:55.508703    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:12:55.513216    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:12:55.513224    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:12:55.526892    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:12:55.526905    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:12:55.539119    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:12:55.539134    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:12:55.551945    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:12:55.551958    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:12:55.589720    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:12:55.589731    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:12:55.604784    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:12:55.604795    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:12:55.629476    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:12:55.629490    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:12:55.641454    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:12:55.641466    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:12:55.679842    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:12:55.679854    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:12:55.694117    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:12:55.694127    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:12:55.708532    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:12:55.708544    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:12:55.719981    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:12:55.720001    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:12:55.737711    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:12:55.737720    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:12:55.752800    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:12:55.752815    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:12:58.266307    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:03.267602    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:03.267888    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:13:03.297771    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:13:03.297898    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:13:03.315387    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:13:03.315486    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:13:03.330376    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:13:03.330452    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:13:03.341718    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:13:03.341784    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:13:03.351645    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:13:03.351722    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:13:03.364447    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:13:03.364532    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:13:03.374724    5456 logs.go:276] 0 containers: []
	W0910 11:13:03.374735    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:13:03.374796    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:13:03.384885    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
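The eight `docker ps` probes above enumerate, per control-plane component, every container whose name carries the k8s_<component> prefix; `-a` includes exited containers, which is likely why apiserver, etcd, scheduler and controller-manager each report two IDs (the pre-restart container plus its replacement). A small Go sketch of the same enumeration; containerIDs is a hypothetical helper name, not minikube code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}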
	I0910 11:13:03.384902    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:13:03.384919    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:13:03.400173    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:13:03.400184    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:13:03.411591    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:13:03.411602    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:13:03.423003    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:13:03.423013    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:13:03.433906    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:13:03.433916    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:13:03.445169    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:13:03.445181    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:13:03.456834    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:13:03.456845    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:13:03.485257    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:13:03.485267    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:13:03.502702    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:13:03.502714    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:13:03.539133    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:13:03.539145    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:13:03.553291    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:13:03.553302    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:13:03.590343    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:13:03.590355    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:13:03.604854    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:13:03.604864    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:13:03.630122    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:13:03.630131    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:13:03.642836    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:13:03.642848    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:13:03.681396    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:13:03.681406    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:13:03.685705    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:13:03.685714    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
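The "container status" step in each cycle uses a shell fallback chain: resolve crictl with `which` (substituting the bare name if it is absent so the command line still parses), and if the crictl invocation fails for any reason, fall back to `sudo docker ps -a`. The same string can be replayed verbatim, for example from Go:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exact shell string from the log: prefer crictl, else docker ps -a.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Print(string(out))
}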
	I0910 11:13:06.202265    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:11.204562    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:11.204710    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:13:11.217952    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:13:11.218036    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:13:11.229103    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:13:11.229182    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:13:11.240483    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:13:11.240556    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:13:11.251926    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:13:11.252004    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:13:11.262574    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:13:11.262644    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:13:11.273040    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:13:11.273112    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:13:11.285906    5456 logs.go:276] 0 containers: []
	W0910 11:13:11.285917    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:13:11.285977    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:13:11.301424    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:13:11.301440    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:13:11.301446    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:13:11.338015    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:13:11.338027    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:13:11.351904    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:13:11.351915    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:13:11.384522    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:13:11.384532    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:13:11.403270    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:13:11.403280    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:13:11.407894    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:13:11.407903    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:13:11.419431    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:13:11.419442    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:13:11.431736    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:13:11.431748    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:13:11.471682    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:13:11.471696    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:13:11.510101    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:13:11.510112    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:13:11.522356    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:13:11.522368    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:13:11.534212    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:13:11.534225    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:13:11.553201    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:13:11.553215    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:13:11.564597    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:13:11.564615    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:13:11.583436    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:13:11.583448    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:13:11.595681    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:13:11.595696    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:13:11.606973    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:13:11.606984    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
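Across cycles the pattern repeats on a fixed cadence: probe healthz for ~5s, spend ~3s dumping diagnostics, pause briefly, probe again (11:12:58, 11:13:06, 11:13:14, ... roughly every 8 seconds). An illustrative reconstruction of that loop; the pause and the loop shape are inferred from the timestamps, not read from minikube's source:

package main

import (
	"errors"
	"fmt"
	"time"
)

// Stubs standing in for the real health check and log gathering.
func checkHealthz() error { return errors.New("context deadline exceeded") }
func gatherLogs()         { fmt.Println("gathering diagnostics...") }

func main() {
	for i := 0; i < 3; i++ { // the real loop runs until an overall deadline, not 3 times
		if err := checkHealthz(); err == nil {
			return // apiserver is healthy, stop retrying
		}
		gatherLogs() // docker logs, journalctl, dmesg, describe nodes, ...
		time.Sleep(2500 * time.Millisecond)
	}
}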
	I0910 11:13:14.135914    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:19.136964    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:19.137185    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:13:19.155457    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:13:19.155530    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:13:19.168841    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:13:19.168914    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:13:19.180523    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:13:19.180593    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:13:19.191374    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:13:19.191443    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:13:19.202151    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:13:19.202210    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:13:19.212559    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:13:19.212623    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:13:19.222687    5456 logs.go:276] 0 containers: []
	W0910 11:13:19.222699    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:13:19.222753    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:13:19.233211    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:13:19.233229    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:13:19.233235    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:13:19.247615    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:13:19.247626    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:13:19.259138    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:13:19.259153    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:13:19.282961    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:13:19.282969    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:13:19.300221    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:13:19.300231    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:13:19.311929    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:13:19.311944    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:13:19.324243    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:13:19.324254    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:13:19.328406    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:13:19.328413    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:13:19.340154    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:13:19.340164    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:13:19.351488    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:13:19.351501    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:13:19.365070    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:13:19.365081    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:13:19.378947    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:13:19.378957    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:13:19.390272    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:13:19.390282    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:13:19.402341    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:13:19.402352    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:13:19.415456    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:13:19.415467    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:13:19.452727    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:13:19.452743    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:13:19.493007    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:13:19.493020    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:13:22.037307    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:27.039695    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:27.039820    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:13:27.053188    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:13:27.053269    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:13:27.066942    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:13:27.067018    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:13:27.077723    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:13:27.077797    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:13:27.088323    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:13:27.088398    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:13:27.099929    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:13:27.099999    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:13:27.110583    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:13:27.110657    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:13:27.121238    5456 logs.go:276] 0 containers: []
	W0910 11:13:27.121251    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:13:27.121310    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:13:27.132049    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:13:27.132068    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:13:27.132074    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:13:27.166873    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:13:27.166885    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:13:27.183938    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:13:27.183949    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:13:27.195719    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:13:27.195733    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:13:27.200305    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:13:27.200312    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:13:27.238694    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:13:27.238705    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:13:27.252896    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:13:27.252907    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:13:27.264677    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:13:27.264687    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:13:27.276215    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:13:27.276225    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:13:27.287983    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:13:27.287994    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:13:27.299617    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:13:27.299629    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:13:27.314209    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:13:27.314220    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:13:27.325617    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:13:27.325631    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:13:27.343302    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:13:27.343312    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:13:27.355134    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:13:27.355146    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:13:27.393428    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:13:27.393437    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:13:27.405720    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:13:27.405731    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
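The "describe nodes" step shells out to the kubectl binary that minikube stages inside the guest, version-pinned under /var/lib/minikube/binaries/v1.24.1, against the in-VM kubeconfig, so it needs no kubectl on the host. Replayed directly, with both paths copied from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Print(string(out))
}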
	I0910 11:13:29.929639    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:34.931837    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:34.932074    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:13:34.963164    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:13:34.963274    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:13:34.981029    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:13:34.981123    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:13:34.995042    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:13:34.995121    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:13:35.007011    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:13:35.007080    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:13:35.022377    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:13:35.022468    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:13:35.033552    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:13:35.033626    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:13:35.043794    5456 logs.go:276] 0 containers: []
	W0910 11:13:35.043804    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:13:35.043865    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:13:35.054256    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:13:35.054272    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:13:35.054278    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:13:35.071582    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:13:35.071593    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:13:35.090360    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:13:35.090371    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:13:35.115470    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:13:35.115483    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:13:35.127640    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:13:35.127650    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:13:35.131846    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:13:35.131852    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:13:35.146036    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:13:35.146049    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:13:35.161233    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:13:35.161246    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:13:35.173223    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:13:35.173237    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:13:35.184947    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:13:35.184957    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:13:35.221471    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:13:35.221483    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:13:35.235851    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:13:35.235864    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:13:35.249511    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:13:35.249522    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:13:35.260442    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:13:35.260455    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:13:35.272004    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:13:35.272015    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:13:35.284035    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:13:35.284048    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:13:35.318366    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:13:35.318378    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:13:37.859642    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:42.862045    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:42.862199    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:13:42.876541    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:13:42.876628    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:13:42.888234    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:13:42.888300    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:13:42.898728    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:13:42.898809    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:13:42.909165    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:13:42.909235    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:13:42.919507    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:13:42.919579    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:13:42.930226    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:13:42.930298    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:13:42.943829    5456 logs.go:276] 0 containers: []
	W0910 11:13:42.943842    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:13:42.943903    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:13:42.954678    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:13:42.954697    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:13:42.954701    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:13:42.968539    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:13:42.968548    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:13:42.980152    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:13:42.980163    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:13:42.992468    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:13:42.992481    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:13:42.996817    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:13:42.996826    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:13:43.031196    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:13:43.031209    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:13:43.078139    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:13:43.078152    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:13:43.094791    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:13:43.094804    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:13:43.110302    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:13:43.110316    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:13:43.148726    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:13:43.148738    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:13:43.165959    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:13:43.165970    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:13:43.179849    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:13:43.179860    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:13:43.204742    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:13:43.204750    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:13:43.216791    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:13:43.216804    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:13:43.228209    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:13:43.228220    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:13:43.239602    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:13:43.239614    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:13:43.257190    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:13:43.257200    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:13:45.772270    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:50.774046    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:50.774217    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:13:50.786214    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:13:50.786294    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:13:50.796897    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:13:50.796970    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:13:50.807821    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:13:50.807891    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:13:50.818943    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:13:50.819015    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:13:50.829244    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:13:50.829314    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:13:50.845098    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:13:50.845172    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:13:50.855652    5456 logs.go:276] 0 containers: []
	W0910 11:13:50.855668    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:13:50.855729    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:13:50.866630    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:13:50.866650    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:13:50.866656    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:13:50.903218    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:13:50.903230    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:13:50.921723    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:13:50.921735    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:13:50.933525    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:13:50.933539    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:13:50.950686    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:13:50.950696    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:13:50.962659    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:13:50.962671    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:13:50.967427    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:13:50.967433    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:13:51.012437    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:13:51.012448    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:13:51.036674    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:13:51.036683    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:13:51.051245    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:13:51.051257    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:13:51.065237    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:13:51.065247    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:13:51.080392    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:13:51.080403    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:13:51.092043    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:13:51.092054    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:13:51.105388    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:13:51.105399    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:13:51.144318    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:13:51.144330    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:13:51.156682    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:13:51.156695    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:13:51.168393    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:13:51.168403    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:13:53.682813    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:13:58.684993    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:13:58.685199    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:13:58.702547    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:13:58.702645    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:13:58.717727    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:13:58.717801    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:13:58.729414    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:13:58.729481    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:13:58.741520    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:13:58.741593    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:13:58.751426    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:13:58.751491    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:13:58.761684    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:13:58.761752    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:13:58.784889    5456 logs.go:276] 0 containers: []
	W0910 11:13:58.784901    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:13:58.784964    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:13:58.795239    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:13:58.795258    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:13:58.795265    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:13:58.807054    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:13:58.807066    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:13:58.819644    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:13:58.819656    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:13:58.831711    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:13:58.831725    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:13:58.849573    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:13:58.849582    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:13:58.860630    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:13:58.860642    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:13:58.886525    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:13:58.886535    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:13:58.944383    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:13:58.944397    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:13:58.958789    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:13:58.958800    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:13:58.970733    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:13:58.970746    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:13:58.982190    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:13:58.982202    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:13:59.020129    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:13:59.020140    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:13:59.033473    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:13:59.033484    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:13:59.044804    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:13:59.044816    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:13:59.057471    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:13:59.057483    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:13:59.061814    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:13:59.061823    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:13:59.097098    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:13:59.097111    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:14:01.612568    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:06.614764    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:06.615003    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:06.640996    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:14:06.641118    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:06.660443    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:14:06.660530    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:06.673049    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:14:06.673124    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:06.684045    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:14:06.684120    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:06.700916    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:14:06.700979    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:06.712170    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:14:06.712235    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:06.722336    5456 logs.go:276] 0 containers: []
	W0910 11:14:06.722347    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:06.722406    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:06.732841    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:14:06.732858    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:06.732863    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:06.736994    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:14:06.737002    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:14:06.750679    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:14:06.750689    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:14:06.767893    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:14:06.767906    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:14:06.781698    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:14:06.781709    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:14:06.803435    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:14:06.803446    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:14:06.814740    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:14:06.814751    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:14:06.825987    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:14:06.825999    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:14:06.837546    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:14:06.837558    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:14:06.850045    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:14:06.850054    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:14:06.860926    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:14:06.860940    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:14:06.879096    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:14:06.879106    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:14:06.896639    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:14:06.896652    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:06.908482    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:06.908496    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:06.946584    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:06.946596    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:06.987009    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:14:06.987019    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:14:07.024685    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:07.024696    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:09.549083    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:14.551208    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:14.551426    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:14.567469    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:14:14.567551    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:14.580329    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:14:14.580406    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:14.591818    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:14:14.591888    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:14.608906    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:14:14.608975    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:14.619568    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:14:14.619644    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:14.632994    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:14:14.633065    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:14.643262    5456 logs.go:276] 0 containers: []
	W0910 11:14:14.643277    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:14.643338    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:14.653711    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:14:14.653728    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:14.653734    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:14.658162    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:14.658172    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:14.694476    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:14:14.694487    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:14:14.734930    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:14:14.734941    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:14:14.751262    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:14:14.751275    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:14:14.767907    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:14.767922    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:14.805437    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:14:14.805451    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:14:14.816669    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:14:14.816679    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:14:14.829066    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:14:14.829076    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:14:14.841003    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:14:14.841012    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:14:14.852309    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:14:14.852320    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:14:14.863758    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:14.863768    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:14.888039    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:14:14.888048    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:14:14.904107    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:14:14.904117    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:14:14.918363    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:14:14.918373    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:14:14.929897    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:14:14.929906    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:14:14.947430    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:14:14.947442    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:17.461321    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:22.463516    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:22.463949    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:22.504874    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:14:22.505010    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:22.523266    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:14:22.523365    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:22.537698    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:14:22.537778    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:22.552401    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:14:22.552473    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:22.562361    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:14:22.562434    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:22.573000    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:14:22.573069    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:22.583459    5456 logs.go:276] 0 containers: []
	W0910 11:14:22.583477    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:22.583542    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:22.600665    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:14:22.600682    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:14:22.600687    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:14:22.611758    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:14:22.611770    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:14:22.629118    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:14:22.629130    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:14:22.642520    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:14:22.642533    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:14:22.654299    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:14:22.654310    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:22.666689    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:22.666700    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:22.671355    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:14:22.671361    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:14:22.682824    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:14:22.682838    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:14:22.697522    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:14:22.697531    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:14:22.711057    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:22.711067    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:22.750261    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:14:22.750277    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:14:22.790974    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:14:22.790988    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:14:22.809036    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:14:22.809047    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:14:22.821335    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:14:22.821346    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:14:22.833439    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:14:22.833451    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:14:22.844253    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:22.844264    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:22.866899    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:22.866907    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:25.403837    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:30.405347    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:30.405701    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:30.435330    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:14:30.435465    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:30.455860    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:14:30.455939    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:30.469662    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:14:30.469736    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:30.480880    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:14:30.480950    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:30.492367    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:14:30.492432    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:30.509595    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:14:30.509666    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:30.519661    5456 logs.go:276] 0 containers: []
	W0910 11:14:30.519679    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:30.519741    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:30.530132    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:14:30.530148    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:30.530154    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:30.567356    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:14:30.567369    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:14:30.578997    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:14:30.579010    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:14:30.595239    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:30.595251    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:30.618108    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:30.618118    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:30.652159    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:14:30.652172    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:14:30.667335    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:14:30.667347    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:14:30.679172    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:14:30.679185    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:14:30.696713    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:14:30.696723    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:14:30.709886    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:30.709894    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:30.714191    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:14:30.714198    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:14:30.752720    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:14:30.752733    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:14:30.764102    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:14:30.764112    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:14:30.780672    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:14:30.780685    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:14:30.794939    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:14:30.794951    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:14:30.806735    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:14:30.806746    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:14:30.818561    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:14:30.818573    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:33.332750    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:38.335270    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:38.335779    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:38.375147    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:14:38.375285    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:38.396055    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:14:38.396156    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:38.410803    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:14:38.410875    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:38.422934    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:14:38.423003    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:38.433949    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:14:38.434022    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:38.445354    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:14:38.445421    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:38.456150    5456 logs.go:276] 0 containers: []
	W0910 11:14:38.456161    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:38.456218    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:38.467025    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:14:38.467045    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:14:38.467050    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:14:38.481508    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:14:38.481521    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:14:38.494394    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:14:38.494405    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:38.506460    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:38.506472    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:38.548301    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:14:38.548315    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:14:38.560338    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:14:38.560348    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:14:38.577319    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:14:38.577331    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:14:38.591049    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:14:38.591059    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:14:38.627743    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:14:38.627755    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:14:38.639095    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:14:38.639107    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:14:38.650923    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:14:38.650936    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:14:38.670799    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:14:38.670811    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:14:38.682083    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:38.682094    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:38.705024    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:38.705032    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:38.741196    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:14:38.741205    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:14:38.755306    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:14:38.755315    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:14:38.770941    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:38.770951    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
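Each "Gathering logs for X ..." pair in these cycles maps a named source to one shell command: journalctl for the kubelet and Docker/cri-docker units, a severity-filtered dmesg, docker logs --tail 400 per container ID, crictl (falling back to docker ps) for container status, and kubectl describe nodes against the local kubeconfig. A hedged Go sketch of that gather phase follows, with the command strings copied verbatim from the log and one container ID used as an example; this is illustrative, not minikube's code:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Names and commands are taken from the "Gathering logs for ..." /
    	// "Run: /bin/bash -c ..." pairs above.
    	sources := []struct{ name, cmd string }{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    		{"kube-apiserver [5adcd52e4474]", "docker logs --tail 400 5adcd52e4474"},
    	}
    	for _, s := range sources {
    		fmt.Printf("Gathering logs for %s ...\n", s.name)
    		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("  (failed: %v)\n", err)
    		}
    		fmt.Printf("%s", out)
    	}
    }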
	I0910 11:14:41.279094    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:46.281246    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:46.281438    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:46.304185    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:14:46.304283    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:46.319244    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:14:46.319345    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:46.331670    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:14:46.331736    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:46.342719    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:14:46.342792    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:46.352839    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:14:46.352906    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:46.367274    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:14:46.367350    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:46.380184    5456 logs.go:276] 0 containers: []
	W0910 11:14:46.380195    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:46.380248    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:46.390763    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:14:46.390785    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:14:46.390791    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:14:46.402386    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:14:46.402396    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:14:46.426676    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:14:46.426686    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:14:46.438015    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:46.438027    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:46.462267    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:14:46.462278    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:46.474088    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:46.474099    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:46.513205    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:14:46.513218    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:14:46.552087    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:14:46.552098    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:14:46.565129    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:46.565143    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:46.569691    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:14:46.569698    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:14:46.594539    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:14:46.594549    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:14:46.611743    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:14:46.611752    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:14:46.625775    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:14:46.625788    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:14:46.637037    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:14:46.637050    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:14:46.648116    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:14:46.648128    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:14:46.660420    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:46.660435    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:46.694514    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:14:46.694528    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:14:49.208773    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:14:54.211005    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:14:54.211271    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:14:54.231519    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:14:54.231613    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:14:54.248436    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:14:54.248513    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:14:54.260497    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:14:54.260566    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:14:54.270940    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:14:54.271013    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:14:54.281257    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:14:54.281327    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:14:54.291684    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:14:54.291753    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:14:54.301849    5456 logs.go:276] 0 containers: []
	W0910 11:14:54.301861    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:14:54.301921    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:14:54.312024    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:14:54.312042    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:14:54.312047    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:14:54.325082    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:14:54.325092    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:14:54.343369    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:14:54.343379    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:14:54.355172    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:14:54.355182    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:14:54.366693    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:14:54.366706    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:14:54.403117    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:14:54.403128    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:14:54.416371    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:14:54.416386    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:14:54.452346    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:14:54.452356    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:14:54.470347    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:14:54.470359    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:14:54.482543    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:14:54.482554    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:14:54.495556    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:14:54.495566    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:14:54.512448    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:14:54.512459    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:14:54.526252    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:14:54.526265    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:14:54.537480    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:14:54.537492    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:14:54.561884    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:14:54.561894    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:14:54.566573    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:14:54.566582    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:14:54.581080    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:14:54.581095    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:14:57.129192    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:02.130439    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:02.130585    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:02.142274    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:15:02.142349    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:02.153013    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:15:02.153086    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:02.163762    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:15:02.163825    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:02.174485    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:15:02.174555    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:02.185085    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:15:02.185147    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:02.198831    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:15:02.198901    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:02.213527    5456 logs.go:276] 0 containers: []
	W0910 11:15:02.213541    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:02.213605    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:02.224535    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:15:02.224553    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:15:02.224559    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:15:02.235534    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:15:02.235545    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:15:02.256840    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:02.256850    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:02.280834    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:15:02.280841    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:15:02.322911    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:15:02.322921    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:15:02.336857    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:02.336867    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:02.370781    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:15:02.370793    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:15:02.384802    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:15:02.384812    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:15:02.404511    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:15:02.404521    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:15:02.416427    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:15:02.416437    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:15:02.430203    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:15:02.430213    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:15:02.441431    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:02.441442    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:02.479058    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:02.479072    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:02.483285    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:15:02.483291    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:15:02.495761    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:15:02.495772    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:02.507917    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:15:02.507928    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:15:02.520330    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:15:02.520343    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:15:05.033954    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:10.034308    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:10.034555    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:10.061301    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:15:10.061388    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:10.083168    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:15:10.083245    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:10.097471    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:15:10.097544    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:10.108808    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:15:10.108879    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:10.123598    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:15:10.123665    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:10.134791    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:15:10.134860    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:10.144697    5456 logs.go:276] 0 containers: []
	W0910 11:15:10.144710    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:10.144762    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:10.155735    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:15:10.155755    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:15:10.155761    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:15:10.166988    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:15:10.167000    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:15:10.181614    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:15:10.181624    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:15:10.193423    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:15:10.193434    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:15:10.206683    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:15:10.206694    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:15:10.218611    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:10.218622    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:10.256588    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:15:10.256597    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:15:10.268647    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:15:10.268660    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:15:10.280439    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:15:10.280451    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:10.293001    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:15:10.293011    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:15:10.307579    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:10.307590    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:10.342064    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:15:10.342076    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:15:10.356353    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:15:10.356363    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:15:10.396137    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:15:10.396153    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:15:10.407787    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:15:10.407801    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:15:10.424854    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:10.424864    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:10.447355    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:10.447363    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:12.953403    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:17.955783    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
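Note the different failure shape here: every other probe in this stretch fails with "context deadline exceeded" (a connection attempt was in flight but no response arrived within the client's 5s budget), while this one fails with "dial tcp ... i/o timeout" (the TCP connection itself was never established). The two cases can be told apart programmatically; a small illustrative Go example, not minikube code:

    package main

    import (
    	"errors"
    	"fmt"
    	"net"
    	"net/http"
    	"time"
    )

    func classify(err error) string {
    	var oe *net.OpError
    	if errors.As(err, &oe) && oe.Op == "dial" {
    		return "dial timeout: TCP connection never established"
    	}
    	var ne net.Error
    	if errors.As(err, &ne) && ne.Timeout() {
    		return "client timeout: connected (or connecting) but no response in budget"
    	}
    	return "non-timeout failure"
    }

    func main() {
    	client := &http.Client{Timeout: 5 * time.Second}
    	// 10.0.2.15 is the QEMU guest address from the log; unreachable from outside the VM.
    	if _, err := client.Get("https://10.0.2.15:8443/healthz"); err != nil {
    		fmt.Println(classify(err))
    	}
    }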
	I0910 11:15:17.955984    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:17.972084    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:15:17.972174    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:17.984137    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:15:17.984203    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:17.999530    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:15:17.999604    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:18.009681    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:15:18.009756    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:18.020293    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:15:18.020354    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:18.030832    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:15:18.030904    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:18.041010    5456 logs.go:276] 0 containers: []
	W0910 11:15:18.041021    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:18.041087    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:18.051601    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:15:18.051619    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:15:18.051625    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:15:18.090025    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:15:18.090042    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:15:18.106260    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:15:18.106271    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:15:18.118360    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:15:18.118372    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:15:18.136268    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:15:18.136278    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:15:18.147532    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:15:18.147541    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:15:18.158871    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:18.158881    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:18.163184    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:18.163194    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:18.197833    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:15:18.197844    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:15:18.212398    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:15:18.212409    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:15:18.227111    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:18.227122    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:18.265291    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:15:18.265302    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:15:18.277692    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:15:18.277704    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:15:18.290463    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:15:18.290478    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:15:18.301923    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:18.301933    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:18.325352    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:15:18.325360    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:18.337608    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:15:18.337619    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:15:20.853964    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:25.856298    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:25.856530    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:25.874701    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:15:25.874802    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:25.892835    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:15:25.892909    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:25.904428    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:15:25.904502    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:25.919181    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:15:25.919251    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:25.929746    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:15:25.929814    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:25.941214    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:15:25.941285    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:25.952133    5456 logs.go:276] 0 containers: []
	W0910 11:15:25.952145    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:25.952207    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:25.962683    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:15:25.962700    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:15:25.962706    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:15:25.974246    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:15:25.974257    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:15:25.985908    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:15:25.985919    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:15:26.003428    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:15:26.003438    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:15:26.015174    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:15:26.015185    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:15:26.025907    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:26.025919    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:26.030050    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:26.030059    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:26.066004    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:15:26.066015    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:15:26.079933    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:15:26.079943    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:15:26.094786    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:26.094797    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:26.118152    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:15:26.118162    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:26.135626    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:15:26.135638    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:15:26.149647    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:15:26.149657    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:15:26.160998    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:15:26.161014    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:15:26.172283    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:26.172294    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:26.209632    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:15:26.209643    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:15:26.246854    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:15:26.246866    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:15:28.760426    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:33.762690    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:33.762965    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:33.784810    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:15:33.784914    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:33.800287    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:15:33.800371    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:33.812760    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:15:33.812844    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:33.823704    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:15:33.823773    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:33.834200    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:15:33.834269    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:33.844365    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:15:33.844435    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:33.854234    5456 logs.go:276] 0 containers: []
	W0910 11:15:33.854246    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:33.854300    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:33.864741    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:15:33.864759    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:33.864764    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:33.902283    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:33.902295    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:33.936379    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:15:33.936393    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:15:33.974509    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:15:33.974522    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:15:33.988634    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:15:33.988646    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:15:34.003075    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:15:34.003092    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:15:34.015063    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:15:34.015075    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:15:34.026736    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:15:34.026749    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:15:34.038180    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:15:34.038190    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:15:34.054882    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:34.054892    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:34.076751    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:15:34.076761    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:34.088880    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:34.088892    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:34.092988    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:15:34.092995    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:15:34.110131    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:15:34.110142    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:15:34.122960    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:15:34.122971    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:15:34.137061    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:15:34.137070    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:15:34.149080    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:15:34.149090    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:15:36.660978    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:41.663272    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:41.663657    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:15:41.700583    5456 logs.go:276] 2 containers: [5adcd52e4474 0871f0cf5a37]
	I0910 11:15:41.700725    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:15:41.720380    5456 logs.go:276] 2 containers: [8327080ac8b7 8d2c0af3a670]
	I0910 11:15:41.720464    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:15:41.734987    5456 logs.go:276] 1 containers: [d9bc409d4f1b]
	I0910 11:15:41.735064    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:15:41.747872    5456 logs.go:276] 2 containers: [d19acbf1cfca 29be8057a1dd]
	I0910 11:15:41.747957    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:15:41.758952    5456 logs.go:276] 1 containers: [9f9c9391391d]
	I0910 11:15:41.759015    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:15:41.769535    5456 logs.go:276] 2 containers: [f9d24b13da34 8db99da6a98d]
	I0910 11:15:41.769596    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:15:41.779925    5456 logs.go:276] 0 containers: []
	W0910 11:15:41.779937    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:15:41.779998    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:15:41.790356    5456 logs.go:276] 2 containers: [7dc30888803e 5484ce958d25]
	I0910 11:15:41.790372    5456 logs.go:123] Gathering logs for storage-provisioner [5484ce958d25] ...
	I0910 11:15:41.790377    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5484ce958d25"
	I0910 11:15:41.801250    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:15:41.801260    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:15:41.813876    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:15:41.813887    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:15:41.818006    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:15:41.818013    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:15:41.852726    5456 logs.go:123] Gathering logs for coredns [d9bc409d4f1b] ...
	I0910 11:15:41.852739    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9bc409d4f1b"
	I0910 11:15:41.872871    5456 logs.go:123] Gathering logs for kube-scheduler [d19acbf1cfca] ...
	I0910 11:15:41.872882    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d19acbf1cfca"
	I0910 11:15:41.888719    5456 logs.go:123] Gathering logs for storage-provisioner [7dc30888803e] ...
	I0910 11:15:41.888731    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc30888803e"
	I0910 11:15:41.900756    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:15:41.900768    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:15:41.923039    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:15:41.923049    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:15:41.959109    5456 logs.go:123] Gathering logs for etcd [8327080ac8b7] ...
	I0910 11:15:41.959121    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8327080ac8b7"
	I0910 11:15:41.972832    5456 logs.go:123] Gathering logs for kube-controller-manager [f9d24b13da34] ...
	I0910 11:15:41.972847    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d24b13da34"
	I0910 11:15:41.991000    5456 logs.go:123] Gathering logs for kube-controller-manager [8db99da6a98d] ...
	I0910 11:15:41.991010    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db99da6a98d"
	I0910 11:15:42.006817    5456 logs.go:123] Gathering logs for kube-apiserver [5adcd52e4474] ...
	I0910 11:15:42.006828    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5adcd52e4474"
	I0910 11:15:42.020526    5456 logs.go:123] Gathering logs for kube-apiserver [0871f0cf5a37] ...
	I0910 11:15:42.020536    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0871f0cf5a37"
	I0910 11:15:42.065774    5456 logs.go:123] Gathering logs for etcd [8d2c0af3a670] ...
	I0910 11:15:42.065788    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d2c0af3a670"
	I0910 11:15:42.080944    5456 logs.go:123] Gathering logs for kube-scheduler [29be8057a1dd] ...
	I0910 11:15:42.080957    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29be8057a1dd"
	I0910 11:15:42.093207    5456 logs.go:123] Gathering logs for kube-proxy [9f9c9391391d] ...
	I0910 11:15:42.093217    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9c9391391d"
	I0910 11:15:44.606692    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:49.608825    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:15:49.608921    5456 kubeadm.go:597] duration metric: took 4m4.180505459s to restartPrimaryControlPlane
	W0910 11:15:49.609001    5456 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0910 11:15:49.609035    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0910 11:15:50.631306    5456 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.022280666s)
	I0910 11:15:50.631642    5456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 11:15:50.636764    5456 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 11:15:50.639784    5456 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 11:15:50.642410    5456 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 11:15:50.642416    5456 kubeadm.go:157] found existing configuration files:
	
	I0910 11:15:50.642439    5456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/admin.conf
	I0910 11:15:50.644787    5456 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 11:15:50.644807    5456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 11:15:50.647806    5456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/kubelet.conf
	I0910 11:15:50.650867    5456 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 11:15:50.650895    5456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 11:15:50.653320    5456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/controller-manager.conf
	I0910 11:15:50.656230    5456 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 11:15:50.656249    5456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 11:15:50.659446    5456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/scheduler.conf
	I0910 11:15:50.661969    5456 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 11:15:50.661990    5456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
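	The four checks above share one pattern: grep each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and delete the file when the check fails (here every grep exits with status 2 simply because 'kubeadm reset' had already removed the files). A minimal shell sketch of that cleanup pass, with the endpoint copied from the log:

	    # stale-kubeconfig cleanup as performed above; endpoint taken from the log
	    endpoint="https://control-plane.minikube.internal:50528"
	    for name in admin kubelet controller-manager scheduler; do
	        conf="/etc/kubernetes/${name}.conf"
	        # keep the file only if it already points at the expected endpoint
	        sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"
	    done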
	I0910 11:15:50.664541    5456 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 11:15:50.729147    5456 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 11:15:57.293364    5456 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0910 11:15:57.293391    5456 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 11:15:57.293426    5456 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 11:15:57.293472    5456 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 11:15:57.293577    5456 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0910 11:15:57.293674    5456 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 11:15:57.297550    5456 out.go:235]   - Generating certificates and keys ...
	I0910 11:15:57.297587    5456 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 11:15:57.297623    5456 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 11:15:57.297668    5456 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 11:15:57.297711    5456 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 11:15:57.297755    5456 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 11:15:57.297790    5456 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 11:15:57.297827    5456 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 11:15:57.297862    5456 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 11:15:57.297903    5456 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 11:15:57.297942    5456 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 11:15:57.297963    5456 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 11:15:57.297993    5456 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 11:15:57.298023    5456 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 11:15:57.298052    5456 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 11:15:57.298087    5456 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 11:15:57.298121    5456 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 11:15:57.298191    5456 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 11:15:57.298229    5456 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 11:15:57.298255    5456 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 11:15:57.298287    5456 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 11:15:57.307692    5456 out.go:235]   - Booting up control plane ...
	I0910 11:15:57.307726    5456 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 11:15:57.307757    5456 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 11:15:57.307789    5456 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 11:15:57.307827    5456 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 11:15:57.307896    5456 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0910 11:15:57.307930    5456 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501994 seconds
	I0910 11:15:57.307978    5456 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 11:15:57.308035    5456 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 11:15:57.308061    5456 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 11:15:57.308145    5456 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-163000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 11:15:57.308172    5456 kubeadm.go:310] [bootstrap-token] Using token: xg4vz0.n7rwz82vznccqe8o
	I0910 11:15:57.311784    5456 out.go:235]   - Configuring RBAC rules ...
	I0910 11:15:57.311838    5456 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 11:15:57.311890    5456 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 11:15:57.311963    5456 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 11:15:57.312035    5456 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 11:15:57.312095    5456 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 11:15:57.312141    5456 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 11:15:57.312203    5456 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 11:15:57.312227    5456 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0910 11:15:57.312249    5456 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0910 11:15:57.312251    5456 kubeadm.go:310] 
	I0910 11:15:57.312300    5456 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0910 11:15:57.312305    5456 kubeadm.go:310] 
	I0910 11:15:57.312345    5456 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0910 11:15:57.312351    5456 kubeadm.go:310] 
	I0910 11:15:57.312366    5456 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0910 11:15:57.312393    5456 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 11:15:57.312418    5456 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 11:15:57.312421    5456 kubeadm.go:310] 
	I0910 11:15:57.312452    5456 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0910 11:15:57.312456    5456 kubeadm.go:310] 
	I0910 11:15:57.312480    5456 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 11:15:57.312483    5456 kubeadm.go:310] 
	I0910 11:15:57.312510    5456 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0910 11:15:57.312548    5456 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 11:15:57.312592    5456 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 11:15:57.312596    5456 kubeadm.go:310] 
	I0910 11:15:57.312643    5456 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 11:15:57.312681    5456 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0910 11:15:57.312684    5456 kubeadm.go:310] 
	I0910 11:15:57.312729    5456 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xg4vz0.n7rwz82vznccqe8o \
	I0910 11:15:57.312792    5456 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fe03b769f4337d7c0adc05ef52c00fad5eef028fab37b5c6cf35794f6ca4bdd0 \
	I0910 11:15:57.312804    5456 kubeadm.go:310] 	--control-plane 
	I0910 11:15:57.312808    5456 kubeadm.go:310] 
	I0910 11:15:57.312856    5456 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0910 11:15:57.312860    5456 kubeadm.go:310] 
	I0910 11:15:57.312903    5456 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xg4vz0.n7rwz82vznccqe8o \
	I0910 11:15:57.312961    5456 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fe03b769f4337d7c0adc05ef52c00fad5eef028fab37b5c6cf35794f6ca4bdd0 
	I0910 11:15:57.312967    5456 cni.go:84] Creating CNI manager for ""
	I0910 11:15:57.312974    5456 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:15:57.323750    5456 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 11:15:57.327573    5456 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 11:15:57.332307    5456 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
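	The 496 bytes copied to /etc/cni/net.d/1-k8s.conflist are the bridge CNI config announced on the previous line; the log does not show the payload itself. For orientation only, a representative bridge conflist can be written the same way (the field values below are illustrative assumptions, not the exact file minikube ships):

	    sudo mkdir -p /etc/cni/net.d
	    # illustrative bridge CNI config; the real 1-k8s.conflist may differ in detail
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF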
	I0910 11:15:57.337681    5456 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 11:15:57.337732    5456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 11:15:57.337733    5456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-163000 minikube.k8s.io/updated_at=2024_09_10T11_15_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=stopped-upgrade-163000 minikube.k8s.io/primary=true
	I0910 11:15:57.341487    5456 ops.go:34] apiserver oom_adj: -16
	I0910 11:15:57.372446    5456 kubeadm.go:1113] duration metric: took 34.756583ms to wait for elevateKubeSystemPrivileges
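	Two small post-init steps run here: minikube reads the apiserver's OOM score adjustment (-16 means the kernel's OOM killer will strongly avoid the process) and binds cluster-admin to the kube-system default service account, which is what lets the bundled addons act cluster-wide. The binding created above can be inspected later with the same pinned kubectl (a sketch; the binding name minikube-rbac comes from the log):

	    # inspect the clusterrolebinding created at 11:15:57
	    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
	        --kubeconfig=/var/lib/minikube/kubeconfig \
	        get clusterrolebinding minikube-rbac -o yaml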
	I0910 11:15:57.382046    5456 kubeadm.go:394] duration metric: took 4m11.967498333s to StartCluster
	I0910 11:15:57.382064    5456 settings.go:142] acquiring lock: {Name:mkc4479acb7c6185024679cd35acf0055f682c3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:15:57.382149    5456 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:15:57.382560    5456 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/kubeconfig: {Name:mk1f6cc8b92900503b90f69186dd5a0cadd3a95f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:15:57.382809    5456 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:15:57.382815    5456 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 11:15:57.382862    5456 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-163000"
	I0910 11:15:57.382884    5456 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-163000"
	W0910 11:15:57.382890    5456 addons.go:243] addon storage-provisioner should already be in state true
	I0910 11:15:57.382892    5456 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-163000"
	I0910 11:15:57.382904    5456 host.go:66] Checking if "stopped-upgrade-163000" exists ...
	I0910 11:15:57.382908    5456 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-163000"
	I0910 11:15:57.382921    5456 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0910 11:15:57.383856    5456 kapi.go:59] client config for stopped-upgrade-163000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/stopped-upgrade-163000/client.key", CAFile:"/Users/jenkins/minikube-integration/19598-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10692e200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 11:15:57.383983    5456 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-163000"
	W0910 11:15:57.383988    5456 addons.go:243] addon default-storageclass should already be in state true
	I0910 11:15:57.383995    5456 host.go:66] Checking if "stopped-upgrade-163000" exists ...
	I0910 11:15:57.385673    5456 out.go:177] * Verifying Kubernetes components...
	I0910 11:15:57.386058    5456 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 11:15:57.389864    5456 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 11:15:57.389871    5456 sshutil.go:53] new ssh client: &{IP:localhost Port:50494 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/stopped-upgrade-163000/id_rsa Username:docker}
	I0910 11:15:57.393726    5456 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 11:15:57.397684    5456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 11:15:57.401748    5456 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 11:15:57.401754    5456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 11:15:57.401760    5456 sshutil.go:53] new ssh client: &{IP:localhost Port:50494 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/stopped-upgrade-163000/id_rsa Username:docker}
	I0910 11:15:57.468983    5456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 11:15:57.474411    5456 api_server.go:52] waiting for apiserver process to appear ...
	I0910 11:15:57.474458    5456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 11:15:57.478592    5456 api_server.go:72] duration metric: took 95.775583ms to wait for apiserver process to appear ...
	I0910 11:15:57.478602    5456 api_server.go:88] waiting for apiserver healthz status ...
	I0910 11:15:57.478609    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:15:57.484779    5456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 11:15:57.506254    5456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 11:15:57.845897    5456 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0910 11:15:57.845910    5456 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0910 11:16:02.479966    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:02.480010    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:07.480214    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:07.480243    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:12.480447    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:12.480468    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:17.480604    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:17.480644    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:22.481355    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:22.481381    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:27.481848    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:27.481872    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0910 11:16:27.847599    5456 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0910 11:16:27.851001    5456 out.go:177] * Enabled addons: storage-provisioner
	I0910 11:16:27.858791    5456 addons.go:510] duration metric: took 30.476781625s for enable addons: enabled=[storage-provisioner]
	I0910 11:16:32.482496    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:32.482533    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:37.483564    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:37.483595    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:42.484903    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:42.484959    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:47.486486    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:47.486575    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:52.488584    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:16:52.488649    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:16:57.490905    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
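	Every probe in this run waits out its five-second client deadline. 10.0.2.15 is QEMU's default user-mode-NAT guest address, which is not routable from the macOS host, and that is consistent with the apiserver never answering here. The same check can be reproduced by hand (a sketch; --max-time mirrors the 5s deadline seen above, and -k skips verification of minikube's self-signed cert):

	    curl -sk --max-time 5 https://10.0.2.15:8443/healthz || echo "healthz unreachable"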
	I0910 11:16:57.491010    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:16:57.505323    5456 logs.go:276] 1 containers: [73521dd0cfb7]
	I0910 11:16:57.505402    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:16:57.517907    5456 logs.go:276] 1 containers: [8dfcec5f4da8]
	I0910 11:16:57.517985    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:16:57.532215    5456 logs.go:276] 2 containers: [1052fb80b9f5 846243f826bf]
	I0910 11:16:57.532284    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:16:57.543427    5456 logs.go:276] 1 containers: [6b215cc88f63]
	I0910 11:16:57.543501    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:16:57.555232    5456 logs.go:276] 1 containers: [e146a047350b]
	I0910 11:16:57.555314    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:16:57.568551    5456 logs.go:276] 1 containers: [c8a0b8755183]
	I0910 11:16:57.568623    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:16:57.578812    5456 logs.go:276] 0 containers: []
	W0910 11:16:57.578826    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:16:57.578889    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:16:57.589604    5456 logs.go:276] 1 containers: [56245aee5584]
	I0910 11:16:57.589619    5456 logs.go:123] Gathering logs for kube-scheduler [6b215cc88f63] ...
	I0910 11:16:57.589625    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b215cc88f63"
	I0910 11:16:57.605214    5456 logs.go:123] Gathering logs for kube-proxy [e146a047350b] ...
	I0910 11:16:57.605226    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e146a047350b"
	I0910 11:16:57.621250    5456 logs.go:123] Gathering logs for storage-provisioner [56245aee5584] ...
	I0910 11:16:57.621263    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56245aee5584"
	I0910 11:16:57.633109    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:16:57.633122    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:16:57.656730    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:16:57.656739    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:16:57.668966    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:16:57.668978    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:16:57.707405    5456 logs.go:123] Gathering logs for coredns [1052fb80b9f5] ...
	I0910 11:16:57.707417    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1052fb80b9f5"
	I0910 11:16:57.719102    5456 logs.go:123] Gathering logs for kube-apiserver [73521dd0cfb7] ...
	I0910 11:16:57.719113    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73521dd0cfb7"
	I0910 11:16:57.733868    5456 logs.go:123] Gathering logs for etcd [8dfcec5f4da8] ...
	I0910 11:16:57.733883    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfcec5f4da8"
	I0910 11:16:57.748916    5456 logs.go:123] Gathering logs for coredns [846243f826bf] ...
	I0910 11:16:57.748928    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846243f826bf"
	I0910 11:16:57.761653    5456 logs.go:123] Gathering logs for kube-controller-manager [c8a0b8755183] ...
	I0910 11:16:57.761664    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a0b8755183"
	I0910 11:16:57.779500    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:16:57.779510    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:16:57.814259    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:16:57.814269    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
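	Each diagnostics pass above repeats the same shape: enumerate every component's containers by the k8s_ name prefix, then tail the last 400 lines of each, alongside journalctl for kubelet and Docker and a dmesg excerpt. A condensed sketch of the container half of that loop, built from the commands in the log:

	    for component in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                     kube-controller-manager kindnet storage-provisioner; do
	        for id in $(docker ps -a --filter=name=k8s_${component} --format '{{.ID}}'); do
	            docker logs --tail 400 "$id"
	        done
	    done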
	I0910 11:17:00.320606    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:17:05.323093    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:17:05.323237    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:17:05.337147    5456 logs.go:276] 1 containers: [73521dd0cfb7]
	I0910 11:17:05.337225    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:17:05.348744    5456 logs.go:276] 1 containers: [8dfcec5f4da8]
	I0910 11:17:05.348803    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:17:05.363411    5456 logs.go:276] 2 containers: [1052fb80b9f5 846243f826bf]
	I0910 11:17:05.363479    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:17:05.373710    5456 logs.go:276] 1 containers: [6b215cc88f63]
	I0910 11:17:05.373777    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:17:05.384805    5456 logs.go:276] 1 containers: [e146a047350b]
	I0910 11:17:05.384871    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:17:05.395374    5456 logs.go:276] 1 containers: [c8a0b8755183]
	I0910 11:17:05.395433    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:17:05.406033    5456 logs.go:276] 0 containers: []
	W0910 11:17:05.406045    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:17:05.406100    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:17:05.416290    5456 logs.go:276] 1 containers: [56245aee5584]
	I0910 11:17:05.416306    5456 logs.go:123] Gathering logs for kube-controller-manager [c8a0b8755183] ...
	I0910 11:17:05.416312    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a0b8755183"
	I0910 11:17:05.433923    5456 logs.go:123] Gathering logs for storage-provisioner [56245aee5584] ...
	I0910 11:17:05.433935    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56245aee5584"
	I0910 11:17:05.445424    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:17:05.445436    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:17:05.481881    5456 logs.go:123] Gathering logs for kube-apiserver [73521dd0cfb7] ...
	I0910 11:17:05.481892    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73521dd0cfb7"
	I0910 11:17:05.496132    5456 logs.go:123] Gathering logs for etcd [8dfcec5f4da8] ...
	I0910 11:17:05.496145    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfcec5f4da8"
	I0910 11:17:05.510083    5456 logs.go:123] Gathering logs for coredns [1052fb80b9f5] ...
	I0910 11:17:05.510094    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1052fb80b9f5"
	I0910 11:17:05.522031    5456 logs.go:123] Gathering logs for coredns [846243f826bf] ...
	I0910 11:17:05.522042    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846243f826bf"
	I0910 11:17:05.533654    5456 logs.go:123] Gathering logs for kube-scheduler [6b215cc88f63] ...
	I0910 11:17:05.533667    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b215cc88f63"
	I0910 11:17:05.550776    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:17:05.550789    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:17:05.575540    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:17:05.575550    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:17:05.608951    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:17:05.608966    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:17:05.614510    5456 logs.go:123] Gathering logs for kube-proxy [e146a047350b] ...
	I0910 11:17:05.614520    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e146a047350b"
	I0910 11:17:05.626049    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:17:05.626060    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:17:08.139405    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:17:13.141595    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:17:13.141705    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:17:13.153080    5456 logs.go:276] 1 containers: [73521dd0cfb7]
	I0910 11:17:13.153156    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:17:13.163581    5456 logs.go:276] 1 containers: [8dfcec5f4da8]
	I0910 11:17:13.163655    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:17:13.177493    5456 logs.go:276] 2 containers: [1052fb80b9f5 846243f826bf]
	I0910 11:17:13.177560    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:17:13.188338    5456 logs.go:276] 1 containers: [6b215cc88f63]
	I0910 11:17:13.188400    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:17:13.198796    5456 logs.go:276] 1 containers: [e146a047350b]
	I0910 11:17:13.198867    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:17:13.209317    5456 logs.go:276] 1 containers: [c8a0b8755183]
	I0910 11:17:13.209378    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:17:13.219095    5456 logs.go:276] 0 containers: []
	W0910 11:17:13.219112    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:17:13.219171    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:17:13.229112    5456 logs.go:276] 1 containers: [56245aee5584]
	I0910 11:17:13.229126    5456 logs.go:123] Gathering logs for etcd [8dfcec5f4da8] ...
	I0910 11:17:13.229133    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfcec5f4da8"
	I0910 11:17:13.243008    5456 logs.go:123] Gathering logs for coredns [1052fb80b9f5] ...
	I0910 11:17:13.243021    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1052fb80b9f5"
	I0910 11:17:13.254679    5456 logs.go:123] Gathering logs for kube-scheduler [6b215cc88f63] ...
	I0910 11:17:13.254688    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b215cc88f63"
	I0910 11:17:13.270356    5456 logs.go:123] Gathering logs for kube-proxy [e146a047350b] ...
	I0910 11:17:13.270364    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e146a047350b"
	I0910 11:17:13.288604    5456 logs.go:123] Gathering logs for storage-provisioner [56245aee5584] ...
	I0910 11:17:13.288614    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56245aee5584"
	I0910 11:17:13.300470    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:17:13.300479    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:17:13.312576    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:17:13.312586    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:17:13.349444    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:17:13.349451    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:17:13.383836    5456 logs.go:123] Gathering logs for coredns [846243f826bf] ...
	I0910 11:17:13.383848    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846243f826bf"
	I0910 11:17:13.395315    5456 logs.go:123] Gathering logs for kube-controller-manager [c8a0b8755183] ...
	I0910 11:17:13.395325    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a0b8755183"
	I0910 11:17:13.413559    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:17:13.413569    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:17:13.438636    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:17:13.438643    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:17:13.442619    5456 logs.go:123] Gathering logs for kube-apiserver [73521dd0cfb7] ...
	I0910 11:17:13.442626    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73521dd0cfb7"
	I0910 11:17:15.959083    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:17:20.961385    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:17:20.962261    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:17:21.002741    5456 logs.go:276] 1 containers: [73521dd0cfb7]
	I0910 11:17:21.002871    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:17:21.024481    5456 logs.go:276] 1 containers: [8dfcec5f4da8]
	I0910 11:17:21.024565    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:17:21.039524    5456 logs.go:276] 2 containers: [1052fb80b9f5 846243f826bf]
	I0910 11:17:21.039598    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:17:21.053791    5456 logs.go:276] 1 containers: [6b215cc88f63]
	I0910 11:17:21.053857    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:17:21.065393    5456 logs.go:276] 1 containers: [e146a047350b]
	I0910 11:17:21.065466    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:17:21.076090    5456 logs.go:276] 1 containers: [c8a0b8755183]
	I0910 11:17:21.076157    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:17:21.090620    5456 logs.go:276] 0 containers: []
	W0910 11:17:21.090630    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:17:21.090683    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:17:21.101108    5456 logs.go:276] 1 containers: [56245aee5584]
	I0910 11:17:21.101124    5456 logs.go:123] Gathering logs for kube-scheduler [6b215cc88f63] ...
	I0910 11:17:21.101129    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b215cc88f63"
	I0910 11:17:21.115488    5456 logs.go:123] Gathering logs for kube-controller-manager [c8a0b8755183] ...
	I0910 11:17:21.115496    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a0b8755183"
	I0910 11:17:21.133303    5456 logs.go:123] Gathering logs for storage-provisioner [56245aee5584] ...
	I0910 11:17:21.133313    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56245aee5584"
	I0910 11:17:21.144625    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:17:21.144636    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:17:21.149391    5456 logs.go:123] Gathering logs for etcd [8dfcec5f4da8] ...
	I0910 11:17:21.149397    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfcec5f4da8"
	I0910 11:17:21.163785    5456 logs.go:123] Gathering logs for coredns [1052fb80b9f5] ...
	I0910 11:17:21.163799    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1052fb80b9f5"
	I0910 11:17:21.180552    5456 logs.go:123] Gathering logs for coredns [846243f826bf] ...
	I0910 11:17:21.180565    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846243f826bf"
	I0910 11:17:21.192028    5456 logs.go:123] Gathering logs for kube-proxy [e146a047350b] ...
	I0910 11:17:21.192041    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e146a047350b"
	I0910 11:17:21.203623    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:17:21.203636    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:17:21.228090    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:17:21.228098    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:17:21.239700    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:17:21.239713    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:17:21.275092    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:17:21.275103    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:17:21.311517    5456 logs.go:123] Gathering logs for kube-apiserver [73521dd0cfb7] ...
	I0910 11:17:21.311529    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73521dd0cfb7"
	I0910 11:17:23.827745    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:17:28.828985    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:17:28.829041    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:17:28.839659    5456 logs.go:276] 1 containers: [73521dd0cfb7]
	I0910 11:17:28.839716    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:17:28.852014    5456 logs.go:276] 1 containers: [8dfcec5f4da8]
	I0910 11:17:28.852062    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:17:28.862243    5456 logs.go:276] 2 containers: [1052fb80b9f5 846243f826bf]
	I0910 11:17:28.862303    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:17:28.873712    5456 logs.go:276] 1 containers: [6b215cc88f63]
	I0910 11:17:28.873759    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:17:28.884172    5456 logs.go:276] 1 containers: [e146a047350b]
	I0910 11:17:28.884217    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:17:28.895237    5456 logs.go:276] 1 containers: [c8a0b8755183]
	I0910 11:17:28.895291    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:17:28.905764    5456 logs.go:276] 0 containers: []
	W0910 11:17:28.905773    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:17:28.905811    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:17:28.917935    5456 logs.go:276] 1 containers: [56245aee5584]
	I0910 11:17:28.917952    5456 logs.go:123] Gathering logs for coredns [1052fb80b9f5] ...
	I0910 11:17:28.917958    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1052fb80b9f5"
	I0910 11:17:28.931381    5456 logs.go:123] Gathering logs for coredns [846243f826bf] ...
	I0910 11:17:28.931393    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846243f826bf"
	I0910 11:17:28.943960    5456 logs.go:123] Gathering logs for kube-controller-manager [c8a0b8755183] ...
	I0910 11:17:28.943973    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a0b8755183"
	I0910 11:17:28.962791    5456 logs.go:123] Gathering logs for storage-provisioner [56245aee5584] ...
	I0910 11:17:28.962807    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56245aee5584"
	I0910 11:17:28.975477    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:17:28.975488    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:17:28.980514    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:17:28.980526    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:17:29.020494    5456 logs.go:123] Gathering logs for etcd [8dfcec5f4da8] ...
	I0910 11:17:29.020509    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfcec5f4da8"
	I0910 11:17:29.042744    5456 logs.go:123] Gathering logs for kube-proxy [e146a047350b] ...
	I0910 11:17:29.042763    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e146a047350b"
	I0910 11:17:29.057792    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:17:29.057804    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:17:29.083627    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:17:29.083638    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:17:29.095852    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:17:29.095863    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:17:29.132016    5456 logs.go:123] Gathering logs for kube-apiserver [73521dd0cfb7] ...
	I0910 11:17:29.132027    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73521dd0cfb7"
	I0910 11:17:29.147206    5456 logs.go:123] Gathering logs for kube-scheduler [6b215cc88f63] ...
	I0910 11:17:29.147218    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b215cc88f63"
	I0910 11:17:31.662955    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:17:36.665612    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:17:36.665745    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:17:36.688116    5456 logs.go:276] 1 containers: [73521dd0cfb7]
	I0910 11:17:36.688192    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:17:36.701587    5456 logs.go:276] 1 containers: [8dfcec5f4da8]
	I0910 11:17:36.701653    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:17:36.712679    5456 logs.go:276] 2 containers: [1052fb80b9f5 846243f826bf]
	I0910 11:17:36.712740    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:17:36.723378    5456 logs.go:276] 1 containers: [6b215cc88f63]
	I0910 11:17:36.723446    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:17:36.737770    5456 logs.go:276] 1 containers: [e146a047350b]
	I0910 11:17:36.737838    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:17:36.748353    5456 logs.go:276] 1 containers: [c8a0b8755183]
	I0910 11:17:36.748440    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:17:36.758681    5456 logs.go:276] 0 containers: []
	W0910 11:17:36.758695    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:17:36.758754    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:17:36.768914    5456 logs.go:276] 1 containers: [56245aee5584]
	I0910 11:17:36.768929    5456 logs.go:123] Gathering logs for kube-scheduler [6b215cc88f63] ...
	I0910 11:17:36.768934    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b215cc88f63"
	I0910 11:17:36.783583    5456 logs.go:123] Gathering logs for kube-controller-manager [c8a0b8755183] ...
	I0910 11:17:36.783595    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a0b8755183"
	I0910 11:17:36.802766    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:17:36.802777    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:17:36.826326    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:17:36.826334    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:17:36.837390    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:17:36.837401    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:17:36.841848    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:17:36.841857    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:17:36.876330    5456 logs.go:123] Gathering logs for coredns [1052fb80b9f5] ...
	I0910 11:17:36.876343    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1052fb80b9f5"
	I0910 11:17:36.888080    5456 logs.go:123] Gathering logs for coredns [846243f826bf] ...
	I0910 11:17:36.888093    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846243f826bf"
	I0910 11:17:36.899494    5456 logs.go:123] Gathering logs for kube-proxy [e146a047350b] ...
	I0910 11:17:36.899507    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e146a047350b"
	I0910 11:17:36.915005    5456 logs.go:123] Gathering logs for storage-provisioner [56245aee5584] ...
	I0910 11:17:36.915018    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56245aee5584"
	I0910 11:17:36.927312    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:17:36.927322    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:17:36.960201    5456 logs.go:123] Gathering logs for kube-apiserver [73521dd0cfb7] ...
	I0910 11:17:36.960211    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73521dd0cfb7"
	I0910 11:17:36.974493    5456 logs.go:123] Gathering logs for etcd [8dfcec5f4da8] ...
	I0910 11:17:36.974504    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfcec5f4da8"
	I0910 11:17:39.490240    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:17:44.492859    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:17:44.493255    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:17:44.534390    5456 logs.go:276] 1 containers: [73521dd0cfb7]
	I0910 11:17:44.534507    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:17:44.556118    5456 logs.go:276] 1 containers: [8dfcec5f4da8]
	I0910 11:17:44.556208    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:17:44.572523    5456 logs.go:276] 2 containers: [1052fb80b9f5 846243f826bf]
	I0910 11:17:44.572597    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:17:44.585879    5456 logs.go:276] 1 containers: [6b215cc88f63]
	I0910 11:17:44.585948    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:17:44.597365    5456 logs.go:276] 1 containers: [e146a047350b]
	I0910 11:17:44.597444    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:17:44.608267    5456 logs.go:276] 1 containers: [c8a0b8755183]
	I0910 11:17:44.608332    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:17:44.619333    5456 logs.go:276] 0 containers: []
	W0910 11:17:44.619345    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:17:44.619406    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:17:44.630297    5456 logs.go:276] 1 containers: [56245aee5584]
	I0910 11:17:44.630314    5456 logs.go:123] Gathering logs for etcd [8dfcec5f4da8] ...
	I0910 11:17:44.630322    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfcec5f4da8"
	I0910 11:17:44.644541    5456 logs.go:123] Gathering logs for coredns [1052fb80b9f5] ...
	I0910 11:17:44.644554    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1052fb80b9f5"
	I0910 11:17:44.656480    5456 logs.go:123] Gathering logs for coredns [846243f826bf] ...
	I0910 11:17:44.656494    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846243f826bf"
	I0910 11:17:44.668202    5456 logs.go:123] Gathering logs for kube-controller-manager [c8a0b8755183] ...
	I0910 11:17:44.668215    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a0b8755183"
	I0910 11:17:44.690385    5456 logs.go:123] Gathering logs for storage-provisioner [56245aee5584] ...
	I0910 11:17:44.690395    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56245aee5584"
	I0910 11:17:44.702162    5456 logs.go:123] Gathering logs for kube-apiserver [73521dd0cfb7] ...
	I0910 11:17:44.702170    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73521dd0cfb7"
	I0910 11:17:44.717456    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:17:44.717466    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:17:44.721858    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:17:44.721864    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:17:44.759641    5456 logs.go:123] Gathering logs for kube-scheduler [6b215cc88f63] ...
	I0910 11:17:44.759652    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b215cc88f63"
	I0910 11:17:44.775138    5456 logs.go:123] Gathering logs for kube-proxy [e146a047350b] ...
	I0910 11:17:44.775149    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e146a047350b"
	I0910 11:17:44.787413    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:17:44.787423    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:17:44.811086    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:17:44.811106    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:17:44.823389    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:17:44.823402    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
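	The pass above shows the shape of every cycle that follows: a GET against https://10.0.2.15:8443/healthz fails after roughly five seconds with "Client.Timeout exceeded", minikube gathers diagnostics from each component, waits, and probes again. A minimal Go sketch of such a poll, assuming an illustrative function name and a 5-second client timeout inferred from the gap between probe and failure (not minikube's actual implementation):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz probes an apiserver /healthz endpoint until it answers "ok"
// or the deadline passes. Endpoint, timeout, and backoff are illustrative.
func pollHealthz(url string, deadline time.Time) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap before "Client.Timeout exceeded"
		Transport: &http.Transport{
			// the test cluster serves a self-signed cert, so skip verification
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil // apiserver is healthy
			}
		}
		time.Sleep(2500 * time.Millisecond) // back off before the next probe
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	err := pollHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(30*time.Second))
	fmt.Println(err)
}
```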
	I0910 11:17:47.357787    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:17:52.360135    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:17:52.360515    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:17:52.396357    5456 logs.go:276] 1 containers: [73521dd0cfb7]
	I0910 11:17:52.396467    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:17:52.415895    5456 logs.go:276] 1 containers: [8dfcec5f4da8]
	I0910 11:17:52.415979    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:17:52.429078    5456 logs.go:276] 2 containers: [1052fb80b9f5 846243f826bf]
	I0910 11:17:52.429148    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:17:52.440335    5456 logs.go:276] 1 containers: [6b215cc88f63]
	I0910 11:17:52.440402    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:17:52.451590    5456 logs.go:276] 1 containers: [e146a047350b]
	I0910 11:17:52.451652    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:17:52.462418    5456 logs.go:276] 1 containers: [c8a0b8755183]
	I0910 11:17:52.462484    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:17:52.472922    5456 logs.go:276] 0 containers: []
	W0910 11:17:52.472935    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:17:52.472995    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:17:52.484591    5456 logs.go:276] 1 containers: [56245aee5584]
	I0910 11:17:52.484609    5456 logs.go:123] Gathering logs for kube-scheduler [6b215cc88f63] ...
	I0910 11:17:52.484613    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b215cc88f63"
	I0910 11:17:52.499629    5456 logs.go:123] Gathering logs for storage-provisioner [56245aee5584] ...
	I0910 11:17:52.499641    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56245aee5584"
	I0910 11:17:52.513479    5456 logs.go:123] Gathering logs for kube-apiserver [73521dd0cfb7] ...
	I0910 11:17:52.513493    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73521dd0cfb7"
	I0910 11:17:52.528863    5456 logs.go:123] Gathering logs for etcd [8dfcec5f4da8] ...
	I0910 11:17:52.528876    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfcec5f4da8"
	I0910 11:17:52.543076    5456 logs.go:123] Gathering logs for coredns [846243f826bf] ...
	I0910 11:17:52.543088    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846243f826bf"
	I0910 11:17:52.555460    5456 logs.go:123] Gathering logs for coredns [1052fb80b9f5] ...
	I0910 11:17:52.555469    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1052fb80b9f5"
	I0910 11:17:52.567175    5456 logs.go:123] Gathering logs for kube-proxy [e146a047350b] ...
	I0910 11:17:52.567187    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e146a047350b"
	I0910 11:17:52.580482    5456 logs.go:123] Gathering logs for kube-controller-manager [c8a0b8755183] ...
	I0910 11:17:52.580494    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a0b8755183"
	I0910 11:17:52.598342    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:17:52.598355    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:17:52.621490    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:17:52.621497    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:17:52.633111    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:17:52.633124    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:17:52.666053    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:17:52.666061    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:17:52.669933    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:17:52.669940    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
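	Before tailing any logs, each pass resolves container IDs per component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, relying on the `k8s_` name prefix kubelet gives its containers. A hedged sketch of that discovery step in Go (the helper name is invented for illustration):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs (running or exited, hence -a) whose
// name matches the k8s_<component> prefix used by kubelet.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, fmt.Errorf("docker ps for %s: %w", component, err)
	}
	return strings.Fields(string(out)), nil // one ID per output line
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(c)
		fmt.Printf("%s: %v (err=%v)\n", c, ids, err)
	}
}
```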
	I0910 11:17:55.215016    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:18:00.217238    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:18:00.217427    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:18:00.241281    5456 logs.go:276] 1 containers: [73521dd0cfb7]
	I0910 11:18:00.241384    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:18:00.257069    5456 logs.go:276] 1 containers: [8dfcec5f4da8]
	I0910 11:18:00.257147    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:18:00.270811    5456 logs.go:276] 2 containers: [1052fb80b9f5 846243f826bf]
	I0910 11:18:00.270891    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:18:00.285917    5456 logs.go:276] 1 containers: [6b215cc88f63]
	I0910 11:18:00.285987    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:18:00.296708    5456 logs.go:276] 1 containers: [e146a047350b]
	I0910 11:18:00.296782    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:18:00.307837    5456 logs.go:276] 1 containers: [c8a0b8755183]
	I0910 11:18:00.307907    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:18:00.319529    5456 logs.go:276] 0 containers: []
	W0910 11:18:00.319539    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:18:00.319596    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:18:00.330528    5456 logs.go:276] 1 containers: [56245aee5584]
	I0910 11:18:00.330542    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:18:00.330548    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:18:00.366073    5456 logs.go:123] Gathering logs for kube-scheduler [6b215cc88f63] ...
	I0910 11:18:00.366084    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b215cc88f63"
	I0910 11:18:00.382171    5456 logs.go:123] Gathering logs for kube-proxy [e146a047350b] ...
	I0910 11:18:00.382183    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e146a047350b"
	I0910 11:18:00.394400    5456 logs.go:123] Gathering logs for kube-controller-manager [c8a0b8755183] ...
	I0910 11:18:00.394414    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a0b8755183"
	I0910 11:18:00.412875    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:18:00.412887    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:18:00.425164    5456 logs.go:123] Gathering logs for coredns [846243f826bf] ...
	I0910 11:18:00.425177    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846243f826bf"
	I0910 11:18:00.437309    5456 logs.go:123] Gathering logs for storage-provisioner [56245aee5584] ...
	I0910 11:18:00.437318    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56245aee5584"
	I0910 11:18:00.449395    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:18:00.449408    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:18:00.473870    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:18:00.473878    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:18:00.507895    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:18:00.507908    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:18:00.512945    5456 logs.go:123] Gathering logs for kube-apiserver [73521dd0cfb7] ...
	I0910 11:18:00.512955    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73521dd0cfb7"
	I0910 11:18:00.528308    5456 logs.go:123] Gathering logs for etcd [8dfcec5f4da8] ...
	I0910 11:18:00.528319    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfcec5f4da8"
	I0910 11:18:00.547335    5456 logs.go:123] Gathering logs for coredns [1052fb80b9f5] ...
	I0910 11:18:00.547345    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1052fb80b9f5"
	I0910 11:18:03.061765    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:18:08.064301    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:18:08.064696    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:18:08.104173    5456 logs.go:276] 1 containers: [73521dd0cfb7]
	I0910 11:18:08.104284    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:18:08.123530    5456 logs.go:276] 1 containers: [8dfcec5f4da8]
	I0910 11:18:08.123619    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:18:08.140009    5456 logs.go:276] 2 containers: [1052fb80b9f5 846243f826bf]
	I0910 11:18:08.140077    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:18:08.155315    5456 logs.go:276] 1 containers: [6b215cc88f63]
	I0910 11:18:08.155374    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:18:08.166723    5456 logs.go:276] 1 containers: [e146a047350b]
	I0910 11:18:08.166797    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:18:08.177578    5456 logs.go:276] 1 containers: [c8a0b8755183]
	I0910 11:18:08.177635    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:18:08.188440    5456 logs.go:276] 0 containers: []
	W0910 11:18:08.188453    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:18:08.188500    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:18:08.199567    5456 logs.go:276] 1 containers: [56245aee5584]
	I0910 11:18:08.199584    5456 logs.go:123] Gathering logs for kube-controller-manager [c8a0b8755183] ...
	I0910 11:18:08.199589    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a0b8755183"
	I0910 11:18:08.217699    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:18:08.217709    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:18:08.242384    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:18:08.242391    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:18:08.253742    5456 logs.go:123] Gathering logs for kube-apiserver [73521dd0cfb7] ...
	I0910 11:18:08.253751    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73521dd0cfb7"
	I0910 11:18:08.268582    5456 logs.go:123] Gathering logs for coredns [1052fb80b9f5] ...
	I0910 11:18:08.268594    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1052fb80b9f5"
	I0910 11:18:08.287207    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:18:08.287220    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:18:08.323290    5456 logs.go:123] Gathering logs for etcd [8dfcec5f4da8] ...
	I0910 11:18:08.323303    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfcec5f4da8"
	I0910 11:18:08.338407    5456 logs.go:123] Gathering logs for coredns [846243f826bf] ...
	I0910 11:18:08.338420    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846243f826bf"
	I0910 11:18:08.365362    5456 logs.go:123] Gathering logs for kube-scheduler [6b215cc88f63] ...
	I0910 11:18:08.365377    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b215cc88f63"
	I0910 11:18:08.399334    5456 logs.go:123] Gathering logs for kube-proxy [e146a047350b] ...
	I0910 11:18:08.399348    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e146a047350b"
	I0910 11:18:08.419982    5456 logs.go:123] Gathering logs for storage-provisioner [56245aee5584] ...
	I0910 11:18:08.419994    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56245aee5584"
	I0910 11:18:08.440317    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:18:08.440330    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:18:08.474084    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:18:08.474095    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
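	Note that the "Gathering logs for ..." sequence comes out in a different order on every pass (etcd first in one cycle, kube-scheduler or the controller manager first in another). That is consistent with ranging over a Go map, whose iteration order is deliberately randomized by the runtime; whether minikube actually stores its log sources in a map is an assumption here, not something the log proves. A small demonstration of the underlying behavior:

```go
package main

import "fmt"

// Ranging over a Go map yields a randomized key order on each iteration,
// which would produce exactly the shuffled "Gathering logs for ..."
// sequences seen above. The source names below are illustrative.
func main() {
	sources := map[string]string{
		"kubelet": "journalctl -u kubelet -n 400",
		"dmesg":   "dmesg ... | tail -n 400",
		"etcd":    "docker logs --tail 400 <id>",
		"Docker":  "journalctl -u docker -u cri-docker -n 400",
	}
	for i := 0; i < 3; i++ {
		for name := range sources {
			fmt.Print(name, " ") // order typically differs between runs
		}
		fmt.Println()
	}
}
```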
	I0910 11:18:10.978532    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:18:15.979642    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:18:15.980093    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:18:16.019867    5456 logs.go:276] 1 containers: [73521dd0cfb7]
	I0910 11:18:16.020007    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:18:16.042238    5456 logs.go:276] 1 containers: [8dfcec5f4da8]
	I0910 11:18:16.042343    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:18:16.060584    5456 logs.go:276] 4 containers: [712437a68314 078403c98d30 1052fb80b9f5 846243f826bf]
	I0910 11:18:16.060667    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:18:16.072925    5456 logs.go:276] 1 containers: [6b215cc88f63]
	I0910 11:18:16.072999    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:18:16.088073    5456 logs.go:276] 1 containers: [e146a047350b]
	I0910 11:18:16.088142    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:18:16.098420    5456 logs.go:276] 1 containers: [c8a0b8755183]
	I0910 11:18:16.098490    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:18:16.108667    5456 logs.go:276] 0 containers: []
	W0910 11:18:16.108681    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:18:16.108741    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:18:16.122110    5456 logs.go:276] 1 containers: [56245aee5584]
	I0910 11:18:16.122132    5456 logs.go:123] Gathering logs for coredns [1052fb80b9f5] ...
	I0910 11:18:16.122138    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1052fb80b9f5"
	I0910 11:18:16.133723    5456 logs.go:123] Gathering logs for kube-proxy [e146a047350b] ...
	I0910 11:18:16.133736    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e146a047350b"
	I0910 11:18:16.145646    5456 logs.go:123] Gathering logs for coredns [078403c98d30] ...
	I0910 11:18:16.145659    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078403c98d30"
	I0910 11:18:16.162626    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:18:16.162640    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:18:16.167160    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:18:16.167169    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:18:16.200793    5456 logs.go:123] Gathering logs for kube-apiserver [73521dd0cfb7] ...
	I0910 11:18:16.200805    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73521dd0cfb7"
	I0910 11:18:16.215234    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:18:16.215246    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:18:16.240969    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:18:16.240977    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:18:16.253319    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:18:16.253333    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:18:16.287888    5456 logs.go:123] Gathering logs for coredns [846243f826bf] ...
	I0910 11:18:16.287898    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846243f826bf"
	I0910 11:18:16.301019    5456 logs.go:123] Gathering logs for kube-scheduler [6b215cc88f63] ...
	I0910 11:18:16.301030    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b215cc88f63"
	I0910 11:18:16.315533    5456 logs.go:123] Gathering logs for kube-controller-manager [c8a0b8755183] ...
	I0910 11:18:16.315545    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a0b8755183"
	I0910 11:18:16.332966    5456 logs.go:123] Gathering logs for coredns [712437a68314] ...
	I0910 11:18:16.332979    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712437a68314"
	I0910 11:18:16.344968    5456 logs.go:123] Gathering logs for storage-provisioner [56245aee5584] ...
	I0910 11:18:16.344978    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56245aee5584"
	I0910 11:18:16.356834    5456 logs.go:123] Gathering logs for etcd [8dfcec5f4da8] ...
	I0910 11:18:16.356842    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfcec5f4da8"
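	From the 11:18:16 pass onward the coredns filter returns four containers instead of two: because the listing uses `docker ps -a`, the earlier instances 1052fb80b9f5 and 846243f826bf remain visible alongside the newly created 712437a68314 and 078403c98d30, which suggests kubelet recreated the CoreDNS containers during the outage. Each ID is then tailed individually with `docker logs --tail 400`. A sketch of that per-container tail, with an illustrative wrapper name:

```go
package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs fetches the last n lines from one container, the same
// shape as the repeated `docker logs --tail 400 <id>` calls above.
func tailContainerLogs(id string, n int) (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("docker logs --tail %d %s", n, id)).CombinedOutput()
	return string(out), err
}

func main() {
	for _, id := range []string{"712437a68314", "078403c98d30"} {
		out, err := tailContainerLogs(id, 400)
		fmt.Println(id, len(out), err)
	}
}
```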
	I0910 11:18:18.873476    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:18:23.875394    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:18:23.875740    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:18:23.910779    5456 logs.go:276] 1 containers: [73521dd0cfb7]
	I0910 11:18:23.910915    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:18:23.932770    5456 logs.go:276] 1 containers: [8dfcec5f4da8]
	I0910 11:18:23.932860    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:18:23.949138    5456 logs.go:276] 4 containers: [712437a68314 078403c98d30 1052fb80b9f5 846243f826bf]
	I0910 11:18:23.949206    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:18:23.960726    5456 logs.go:276] 1 containers: [6b215cc88f63]
	I0910 11:18:23.960800    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:18:23.971342    5456 logs.go:276] 1 containers: [e146a047350b]
	I0910 11:18:23.971413    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:18:23.982722    5456 logs.go:276] 1 containers: [c8a0b8755183]
	I0910 11:18:23.982786    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:18:23.992957    5456 logs.go:276] 0 containers: []
	W0910 11:18:23.992969    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:18:23.993023    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:18:24.003391    5456 logs.go:276] 1 containers: [56245aee5584]
	I0910 11:18:24.003410    5456 logs.go:123] Gathering logs for kube-controller-manager [c8a0b8755183] ...
	I0910 11:18:24.003416    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a0b8755183"
	I0910 11:18:24.023759    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:18:24.023768    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:18:24.049728    5456 logs.go:123] Gathering logs for coredns [712437a68314] ...
	I0910 11:18:24.049744    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712437a68314"
	I0910 11:18:24.063723    5456 logs.go:123] Gathering logs for etcd [8dfcec5f4da8] ...
	I0910 11:18:24.063736    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfcec5f4da8"
	I0910 11:18:24.079593    5456 logs.go:123] Gathering logs for coredns [078403c98d30] ...
	I0910 11:18:24.079605    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078403c98d30"
	I0910 11:18:24.091554    5456 logs.go:123] Gathering logs for kube-proxy [e146a047350b] ...
	I0910 11:18:24.091563    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e146a047350b"
	I0910 11:18:24.103835    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:18:24.103843    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:18:24.115400    5456 logs.go:123] Gathering logs for kube-apiserver [73521dd0cfb7] ...
	I0910 11:18:24.115409    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73521dd0cfb7"
	I0910 11:18:24.130047    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:18:24.130056    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:18:24.134689    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:18:24.134701    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:18:24.168724    5456 logs.go:123] Gathering logs for coredns [1052fb80b9f5] ...
	I0910 11:18:24.168736    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1052fb80b9f5"
	I0910 11:18:24.181152    5456 logs.go:123] Gathering logs for coredns [846243f826bf] ...
	I0910 11:18:24.181164    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846243f826bf"
	I0910 11:18:24.192874    5456 logs.go:123] Gathering logs for storage-provisioner [56245aee5584] ...
	I0910 11:18:24.192885    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56245aee5584"
	I0910 11:18:24.204667    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:18:24.204676    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:18:24.239389    5456 logs.go:123] Gathering logs for kube-scheduler [6b215cc88f63] ...
	I0910 11:18:24.239399    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b215cc88f63"
	I0910 11:18:26.758187    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:18:31.760767    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:18:31.761227    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:18:31.797843    5456 logs.go:276] 1 containers: [73521dd0cfb7]
	I0910 11:18:31.797969    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:18:31.817381    5456 logs.go:276] 1 containers: [8dfcec5f4da8]
	I0910 11:18:31.817472    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:18:31.831975    5456 logs.go:276] 4 containers: [712437a68314 078403c98d30 1052fb80b9f5 846243f826bf]
	I0910 11:18:31.832056    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:18:31.844403    5456 logs.go:276] 1 containers: [6b215cc88f63]
	I0910 11:18:31.844473    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:18:31.855606    5456 logs.go:276] 1 containers: [e146a047350b]
	I0910 11:18:31.855669    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:18:31.870560    5456 logs.go:276] 1 containers: [c8a0b8755183]
	I0910 11:18:31.870630    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:18:31.880597    5456 logs.go:276] 0 containers: []
	W0910 11:18:31.880606    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:18:31.880656    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:18:31.891352    5456 logs.go:276] 1 containers: [56245aee5584]
	I0910 11:18:31.891370    5456 logs.go:123] Gathering logs for kube-scheduler [6b215cc88f63] ...
	I0910 11:18:31.891375    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b215cc88f63"
	I0910 11:18:31.906656    5456 logs.go:123] Gathering logs for kube-controller-manager [c8a0b8755183] ...
	I0910 11:18:31.906667    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a0b8755183"
	I0910 11:18:31.924096    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:18:31.924106    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:18:31.935710    5456 logs.go:123] Gathering logs for etcd [8dfcec5f4da8] ...
	I0910 11:18:31.935719    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfcec5f4da8"
	I0910 11:18:31.949911    5456 logs.go:123] Gathering logs for coredns [1052fb80b9f5] ...
	I0910 11:18:31.949921    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1052fb80b9f5"
	I0910 11:18:31.961871    5456 logs.go:123] Gathering logs for coredns [846243f826bf] ...
	I0910 11:18:31.961885    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846243f826bf"
	I0910 11:18:31.974237    5456 logs.go:123] Gathering logs for coredns [712437a68314] ...
	I0910 11:18:31.974247    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712437a68314"
	I0910 11:18:31.985165    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:18:31.985176    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:18:32.019133    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:18:32.019143    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:18:32.056246    5456 logs.go:123] Gathering logs for kube-apiserver [73521dd0cfb7] ...
	I0910 11:18:32.056262    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73521dd0cfb7"
	I0910 11:18:32.076869    5456 logs.go:123] Gathering logs for coredns [078403c98d30] ...
	I0910 11:18:32.076881    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078403c98d30"
	I0910 11:18:32.088604    5456 logs.go:123] Gathering logs for kube-proxy [e146a047350b] ...
	I0910 11:18:32.088616    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e146a047350b"
	I0910 11:18:32.099934    5456 logs.go:123] Gathering logs for storage-provisioner [56245aee5584] ...
	I0910 11:18:32.099946    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56245aee5584"
	I0910 11:18:32.111094    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:18:32.111106    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:18:32.115865    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:18:32.115874    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:18:34.640851    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:18:39.642065    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:18:39.642159    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:18:39.653781    5456 logs.go:276] 1 containers: [73521dd0cfb7]
	I0910 11:18:39.653868    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:18:39.666743    5456 logs.go:276] 1 containers: [8dfcec5f4da8]
	I0910 11:18:39.666812    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:18:39.678639    5456 logs.go:276] 4 containers: [712437a68314 078403c98d30 1052fb80b9f5 846243f826bf]
	I0910 11:18:39.678716    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:18:39.691532    5456 logs.go:276] 1 containers: [6b215cc88f63]
	I0910 11:18:39.691610    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:18:39.703707    5456 logs.go:276] 1 containers: [e146a047350b]
	I0910 11:18:39.703756    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:18:39.714768    5456 logs.go:276] 1 containers: [c8a0b8755183]
	I0910 11:18:39.714825    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:18:39.725865    5456 logs.go:276] 0 containers: []
	W0910 11:18:39.725877    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:18:39.725922    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:18:39.737941    5456 logs.go:276] 1 containers: [56245aee5584]
	I0910 11:18:39.737959    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:18:39.737968    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:18:39.743659    5456 logs.go:123] Gathering logs for kube-controller-manager [c8a0b8755183] ...
	I0910 11:18:39.743670    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a0b8755183"
	I0910 11:18:39.761885    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:18:39.761896    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:18:39.787959    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:18:39.787974    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:18:39.824448    5456 logs.go:123] Gathering logs for kube-apiserver [73521dd0cfb7] ...
	I0910 11:18:39.824461    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73521dd0cfb7"
	I0910 11:18:39.841757    5456 logs.go:123] Gathering logs for coredns [1052fb80b9f5] ...
	I0910 11:18:39.841767    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1052fb80b9f5"
	I0910 11:18:39.855096    5456 logs.go:123] Gathering logs for coredns [846243f826bf] ...
	I0910 11:18:39.855108    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846243f826bf"
	I0910 11:18:39.868035    5456 logs.go:123] Gathering logs for kube-scheduler [6b215cc88f63] ...
	I0910 11:18:39.868046    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b215cc88f63"
	I0910 11:18:39.887873    5456 logs.go:123] Gathering logs for storage-provisioner [56245aee5584] ...
	I0910 11:18:39.887888    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56245aee5584"
	I0910 11:18:39.901263    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:18:39.901274    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:18:39.936497    5456 logs.go:123] Gathering logs for etcd [8dfcec5f4da8] ...
	I0910 11:18:39.936512    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfcec5f4da8"
	I0910 11:18:39.955179    5456 logs.go:123] Gathering logs for coredns [712437a68314] ...
	I0910 11:18:39.955193    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712437a68314"
	I0910 11:18:39.969068    5456 logs.go:123] Gathering logs for coredns [078403c98d30] ...
	I0910 11:18:39.969080    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078403c98d30"
	I0910 11:18:39.981641    5456 logs.go:123] Gathering logs for kube-proxy [e146a047350b] ...
	I0910 11:18:39.981652    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e146a047350b"
	I0910 11:18:39.994487    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:18:39.994499    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
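	The recurring "container status" step is a shell one-liner with a built-in fallback: the backtick substitution `which crictl || echo crictl` inserts crictl's path when it is on PATH (or the bare name, which then fails to execute), and the outer `|| sudo docker ps -a` falls back to Docker in that case. Running it through `/bin/bash -c`, as the log does, is what makes the substitution and the `||` chaining work. A sketch of invoking the same one-liner from Go:

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the fallback one-liner from the log: prefer
// crictl when installed, otherwise fall back to plain docker ps -a.
func containerStatus() (string, error) {
	cmd := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	fmt.Println(out, err)
}
```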
	I0910 11:18:42.509335    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:18:47.511630    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:18:47.511870    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:18:47.534368    5456 logs.go:276] 1 containers: [73521dd0cfb7]
	I0910 11:18:47.534482    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:18:47.551435    5456 logs.go:276] 1 containers: [8dfcec5f4da8]
	I0910 11:18:47.551514    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:18:47.563810    5456 logs.go:276] 4 containers: [712437a68314 078403c98d30 1052fb80b9f5 846243f826bf]
	I0910 11:18:47.563886    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:18:47.574877    5456 logs.go:276] 1 containers: [6b215cc88f63]
	I0910 11:18:47.574938    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:18:47.589070    5456 logs.go:276] 1 containers: [e146a047350b]
	I0910 11:18:47.589133    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:18:47.600312    5456 logs.go:276] 1 containers: [c8a0b8755183]
	I0910 11:18:47.600382    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:18:47.614185    5456 logs.go:276] 0 containers: []
	W0910 11:18:47.614196    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:18:47.614257    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:18:47.629046    5456 logs.go:276] 1 containers: [56245aee5584]
	I0910 11:18:47.629068    5456 logs.go:123] Gathering logs for storage-provisioner [56245aee5584] ...
	I0910 11:18:47.629074    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56245aee5584"
	I0910 11:18:47.640658    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:18:47.640670    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:18:47.664797    5456 logs.go:123] Gathering logs for etcd [8dfcec5f4da8] ...
	I0910 11:18:47.664808    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfcec5f4da8"
	I0910 11:18:47.678254    5456 logs.go:123] Gathering logs for coredns [1052fb80b9f5] ...
	I0910 11:18:47.678265    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1052fb80b9f5"
	I0910 11:18:47.689923    5456 logs.go:123] Gathering logs for kube-scheduler [6b215cc88f63] ...
	I0910 11:18:47.689934    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b215cc88f63"
	I0910 11:18:47.712514    5456 logs.go:123] Gathering logs for kube-proxy [e146a047350b] ...
	I0910 11:18:47.712524    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e146a047350b"
	I0910 11:18:47.725002    5456 logs.go:123] Gathering logs for kube-controller-manager [c8a0b8755183] ...
	I0910 11:18:47.725013    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a0b8755183"
	I0910 11:18:47.743099    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:18:47.743109    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:18:47.778147    5456 logs.go:123] Gathering logs for coredns [078403c98d30] ...
	I0910 11:18:47.778162    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078403c98d30"
	I0910 11:18:47.789422    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:18:47.789431    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:18:47.800869    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:18:47.800881    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:18:47.804989    5456 logs.go:123] Gathering logs for coredns [846243f826bf] ...
	I0910 11:18:47.804996    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846243f826bf"
	I0910 11:18:47.816306    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:18:47.816319    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:18:47.849883    5456 logs.go:123] Gathering logs for kube-apiserver [73521dd0cfb7] ...
	I0910 11:18:47.849891    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73521dd0cfb7"
	I0910 11:18:47.863423    5456 logs.go:123] Gathering logs for coredns [712437a68314] ...
	I0910 11:18:47.863434    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712437a68314"
	I0910 11:18:50.377689    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:18:55.379239    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:18:55.379684    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:18:55.419163    5456 logs.go:276] 1 containers: [73521dd0cfb7]
	I0910 11:18:55.419294    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:18:55.453218    5456 logs.go:276] 1 containers: [8dfcec5f4da8]
	I0910 11:18:55.453300    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:18:55.467999    5456 logs.go:276] 4 containers: [712437a68314 078403c98d30 1052fb80b9f5 846243f826bf]
	I0910 11:18:55.468080    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:18:55.483156    5456 logs.go:276] 1 containers: [6b215cc88f63]
	I0910 11:18:55.483220    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:18:55.495590    5456 logs.go:276] 1 containers: [e146a047350b]
	I0910 11:18:55.495661    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:18:55.506259    5456 logs.go:276] 1 containers: [c8a0b8755183]
	I0910 11:18:55.506323    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:18:55.516987    5456 logs.go:276] 0 containers: []
	W0910 11:18:55.517000    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:18:55.517066    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:18:55.528640    5456 logs.go:276] 1 containers: [56245aee5584]
	I0910 11:18:55.528655    5456 logs.go:123] Gathering logs for kube-controller-manager [c8a0b8755183] ...
	I0910 11:18:55.528662    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a0b8755183"
	I0910 11:18:55.546788    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:18:55.546801    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:18:55.570682    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:18:55.570689    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:18:55.582332    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:18:55.582346    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:18:55.586641    5456 logs.go:123] Gathering logs for coredns [078403c98d30] ...
	I0910 11:18:55.586648    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078403c98d30"
	I0910 11:18:55.599144    5456 logs.go:123] Gathering logs for storage-provisioner [56245aee5584] ...
	I0910 11:18:55.599154    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56245aee5584"
	I0910 11:18:55.610801    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:18:55.610812    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:18:55.646835    5456 logs.go:123] Gathering logs for kube-scheduler [6b215cc88f63] ...
	I0910 11:18:55.646847    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b215cc88f63"
	I0910 11:18:55.664246    5456 logs.go:123] Gathering logs for coredns [712437a68314] ...
	I0910 11:18:55.664258    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712437a68314"
	I0910 11:18:55.679688    5456 logs.go:123] Gathering logs for coredns [846243f826bf] ...
	I0910 11:18:55.679701    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846243f826bf"
	I0910 11:18:55.691146    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:18:55.691158    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:18:55.725247    5456 logs.go:123] Gathering logs for kube-apiserver [73521dd0cfb7] ...
	I0910 11:18:55.725257    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73521dd0cfb7"
	I0910 11:18:55.739702    5456 logs.go:123] Gathering logs for kube-proxy [e146a047350b] ...
	I0910 11:18:55.739715    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e146a047350b"
	I0910 11:18:55.758441    5456 logs.go:123] Gathering logs for etcd [8dfcec5f4da8] ...
	I0910 11:18:55.758455    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfcec5f4da8"
	I0910 11:18:55.772064    5456 logs.go:123] Gathering logs for coredns [1052fb80b9f5] ...
	I0910 11:18:55.772077    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1052fb80b9f5"
	I0910 11:18:58.285479    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:19:03.288144    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:19:03.288327    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:19:03.302092    5456 logs.go:276] 1 containers: [73521dd0cfb7]
	I0910 11:19:03.302165    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:19:03.313339    5456 logs.go:276] 1 containers: [8dfcec5f4da8]
	I0910 11:19:03.313408    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:19:03.323752    5456 logs.go:276] 4 containers: [712437a68314 078403c98d30 1052fb80b9f5 846243f826bf]
	I0910 11:19:03.323822    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:19:03.334302    5456 logs.go:276] 1 containers: [6b215cc88f63]
	I0910 11:19:03.334365    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:19:03.349589    5456 logs.go:276] 1 containers: [e146a047350b]
	I0910 11:19:03.349654    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:19:03.359650    5456 logs.go:276] 1 containers: [c8a0b8755183]
	I0910 11:19:03.359715    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:19:03.369692    5456 logs.go:276] 0 containers: []
	W0910 11:19:03.369703    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:19:03.369756    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:19:03.379798    5456 logs.go:276] 1 containers: [56245aee5584]
	I0910 11:19:03.379814    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:19:03.379820    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:19:03.414077    5456 logs.go:123] Gathering logs for kube-apiserver [73521dd0cfb7] ...
	I0910 11:19:03.414085    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73521dd0cfb7"
	I0910 11:19:03.428250    5456 logs.go:123] Gathering logs for coredns [078403c98d30] ...
	I0910 11:19:03.428260    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078403c98d30"
	I0910 11:19:03.439975    5456 logs.go:123] Gathering logs for coredns [1052fb80b9f5] ...
	I0910 11:19:03.439984    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1052fb80b9f5"
	I0910 11:19:03.451863    5456 logs.go:123] Gathering logs for kube-scheduler [6b215cc88f63] ...
	I0910 11:19:03.451874    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b215cc88f63"
	I0910 11:19:03.470080    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:19:03.470091    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:19:03.475463    5456 logs.go:123] Gathering logs for kube-controller-manager [c8a0b8755183] ...
	I0910 11:19:03.475475    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a0b8755183"
	I0910 11:19:03.494255    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:19:03.494266    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:19:03.519972    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:19:03.519989    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:19:03.532216    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:19:03.532229    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:19:03.570123    5456 logs.go:123] Gathering logs for etcd [8dfcec5f4da8] ...
	I0910 11:19:03.570137    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfcec5f4da8"
	I0910 11:19:03.585090    5456 logs.go:123] Gathering logs for coredns [712437a68314] ...
	I0910 11:19:03.585103    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712437a68314"
	I0910 11:19:03.598407    5456 logs.go:123] Gathering logs for coredns [846243f826bf] ...
	I0910 11:19:03.598419    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846243f826bf"
	I0910 11:19:03.611610    5456 logs.go:123] Gathering logs for kube-proxy [e146a047350b] ...
	I0910 11:19:03.611622    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e146a047350b"
	I0910 11:19:03.624462    5456 logs.go:123] Gathering logs for storage-provisioner [56245aee5584] ...
	I0910 11:19:03.624473    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56245aee5584"
	I0910 11:19:06.139881    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:19:11.141964    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:19:11.142210    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:19:11.173328    5456 logs.go:276] 1 containers: [73521dd0cfb7]
	I0910 11:19:11.173419    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:19:11.187943    5456 logs.go:276] 1 containers: [8dfcec5f4da8]
	I0910 11:19:11.188018    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:19:11.200531    5456 logs.go:276] 4 containers: [712437a68314 078403c98d30 1052fb80b9f5 846243f826bf]
	I0910 11:19:11.200607    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:19:11.210942    5456 logs.go:276] 1 containers: [6b215cc88f63]
	I0910 11:19:11.211013    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:19:11.222139    5456 logs.go:276] 1 containers: [e146a047350b]
	I0910 11:19:11.222206    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:19:11.232348    5456 logs.go:276] 1 containers: [c8a0b8755183]
	I0910 11:19:11.232406    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:19:11.243072    5456 logs.go:276] 0 containers: []
	W0910 11:19:11.243083    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:19:11.243136    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:19:11.254572    5456 logs.go:276] 1 containers: [56245aee5584]
	I0910 11:19:11.254590    5456 logs.go:123] Gathering logs for kube-controller-manager [c8a0b8755183] ...
	I0910 11:19:11.254595    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a0b8755183"
	I0910 11:19:11.276775    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:19:11.276785    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:19:11.301475    5456 logs.go:123] Gathering logs for coredns [712437a68314] ...
	I0910 11:19:11.301484    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712437a68314"
	I0910 11:19:11.313007    5456 logs.go:123] Gathering logs for coredns [846243f826bf] ...
	I0910 11:19:11.313017    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846243f826bf"
	I0910 11:19:11.326647    5456 logs.go:123] Gathering logs for coredns [078403c98d30] ...
	I0910 11:19:11.326659    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078403c98d30"
	I0910 11:19:11.345851    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:19:11.345863    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:19:11.357681    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:19:11.357695    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:19:11.361804    5456 logs.go:123] Gathering logs for etcd [8dfcec5f4da8] ...
	I0910 11:19:11.361813    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfcec5f4da8"
	I0910 11:19:11.380139    5456 logs.go:123] Gathering logs for kube-proxy [e146a047350b] ...
	I0910 11:19:11.380152    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e146a047350b"
	I0910 11:19:11.392063    5456 logs.go:123] Gathering logs for kube-apiserver [73521dd0cfb7] ...
	I0910 11:19:11.392073    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73521dd0cfb7"
	I0910 11:19:11.407694    5456 logs.go:123] Gathering logs for coredns [1052fb80b9f5] ...
	I0910 11:19:11.407706    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1052fb80b9f5"
	I0910 11:19:11.419474    5456 logs.go:123] Gathering logs for kube-scheduler [6b215cc88f63] ...
	I0910 11:19:11.419486    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b215cc88f63"
	I0910 11:19:11.434282    5456 logs.go:123] Gathering logs for storage-provisioner [56245aee5584] ...
	I0910 11:19:11.434293    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56245aee5584"
	I0910 11:19:11.446893    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:19:11.446905    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:19:11.481842    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:19:11.481853    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:19:14.056596    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:19:19.059235    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:19:19.059553    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:19:19.092144    5456 logs.go:276] 1 containers: [73521dd0cfb7]
	I0910 11:19:19.092252    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:19:19.109902    5456 logs.go:276] 1 containers: [8dfcec5f4da8]
	I0910 11:19:19.109984    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:19:19.123006    5456 logs.go:276] 4 containers: [712437a68314 078403c98d30 1052fb80b9f5 846243f826bf]
	I0910 11:19:19.123080    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:19:19.134654    5456 logs.go:276] 1 containers: [6b215cc88f63]
	I0910 11:19:19.134722    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:19:19.145299    5456 logs.go:276] 1 containers: [e146a047350b]
	I0910 11:19:19.145368    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:19:19.159495    5456 logs.go:276] 1 containers: [c8a0b8755183]
	I0910 11:19:19.159557    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:19:19.171441    5456 logs.go:276] 0 containers: []
	W0910 11:19:19.171453    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:19:19.171510    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:19:19.181409    5456 logs.go:276] 1 containers: [56245aee5584]
	I0910 11:19:19.181425    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:19:19.181431    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:19:19.214772    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:19:19.214782    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:19:19.219616    5456 logs.go:123] Gathering logs for coredns [846243f826bf] ...
	I0910 11:19:19.219623    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846243f826bf"
	I0910 11:19:19.231174    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:19:19.231186    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:19:19.242927    5456 logs.go:123] Gathering logs for kube-apiserver [73521dd0cfb7] ...
	I0910 11:19:19.242941    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73521dd0cfb7"
	I0910 11:19:19.257594    5456 logs.go:123] Gathering logs for coredns [078403c98d30] ...
	I0910 11:19:19.257604    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078403c98d30"
	I0910 11:19:19.268921    5456 logs.go:123] Gathering logs for kube-proxy [e146a047350b] ...
	I0910 11:19:19.268931    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e146a047350b"
	I0910 11:19:19.289099    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:19:19.289110    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:19:19.325547    5456 logs.go:123] Gathering logs for etcd [8dfcec5f4da8] ...
	I0910 11:19:19.325557    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfcec5f4da8"
	I0910 11:19:19.340075    5456 logs.go:123] Gathering logs for coredns [712437a68314] ...
	I0910 11:19:19.340088    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712437a68314"
	I0910 11:19:19.352027    5456 logs.go:123] Gathering logs for kube-controller-manager [c8a0b8755183] ...
	I0910 11:19:19.352039    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a0b8755183"
	I0910 11:19:19.368750    5456 logs.go:123] Gathering logs for storage-provisioner [56245aee5584] ...
	I0910 11:19:19.368761    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56245aee5584"
	I0910 11:19:19.385531    5456 logs.go:123] Gathering logs for coredns [1052fb80b9f5] ...
	I0910 11:19:19.385548    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1052fb80b9f5"
	I0910 11:19:19.396791    5456 logs.go:123] Gathering logs for kube-scheduler [6b215cc88f63] ...
	I0910 11:19:19.396801    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b215cc88f63"
	I0910 11:19:19.411950    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:19:19.411961    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:19:21.937335    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:19:26.939629    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:19:26.940122    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:19:26.975526    5456 logs.go:276] 1 containers: [73521dd0cfb7]
	I0910 11:19:26.975659    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:19:26.997070    5456 logs.go:276] 1 containers: [8dfcec5f4da8]
	I0910 11:19:26.997158    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:19:27.011961    5456 logs.go:276] 4 containers: [712437a68314 078403c98d30 1052fb80b9f5 846243f826bf]
	I0910 11:19:27.012041    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:19:27.023875    5456 logs.go:276] 1 containers: [6b215cc88f63]
	I0910 11:19:27.023945    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:19:27.034477    5456 logs.go:276] 1 containers: [e146a047350b]
	I0910 11:19:27.034534    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:19:27.050488    5456 logs.go:276] 1 containers: [c8a0b8755183]
	I0910 11:19:27.050545    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:19:27.061015    5456 logs.go:276] 0 containers: []
	W0910 11:19:27.061027    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:19:27.061075    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:19:27.074242    5456 logs.go:276] 1 containers: [56245aee5584]
	I0910 11:19:27.074259    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:19:27.074264    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:19:27.086041    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:19:27.086057    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:19:27.120793    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:19:27.120801    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:19:27.125003    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:19:27.125011    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:19:27.163978    5456 logs.go:123] Gathering logs for etcd [8dfcec5f4da8] ...
	I0910 11:19:27.163992    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfcec5f4da8"
	I0910 11:19:27.178254    5456 logs.go:123] Gathering logs for kube-scheduler [6b215cc88f63] ...
	I0910 11:19:27.178267    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b215cc88f63"
	I0910 11:19:27.192818    5456 logs.go:123] Gathering logs for coredns [1052fb80b9f5] ...
	I0910 11:19:27.192828    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1052fb80b9f5"
	I0910 11:19:27.206440    5456 logs.go:123] Gathering logs for kube-proxy [e146a047350b] ...
	I0910 11:19:27.206452    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e146a047350b"
	I0910 11:19:27.218229    5456 logs.go:123] Gathering logs for storage-provisioner [56245aee5584] ...
	I0910 11:19:27.218240    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56245aee5584"
	I0910 11:19:27.229491    5456 logs.go:123] Gathering logs for kube-apiserver [73521dd0cfb7] ...
	I0910 11:19:27.229504    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73521dd0cfb7"
	I0910 11:19:27.243520    5456 logs.go:123] Gathering logs for coredns [078403c98d30] ...
	I0910 11:19:27.243531    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078403c98d30"
	I0910 11:19:27.258926    5456 logs.go:123] Gathering logs for coredns [712437a68314] ...
	I0910 11:19:27.258939    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712437a68314"
	I0910 11:19:27.270968    5456 logs.go:123] Gathering logs for coredns [846243f826bf] ...
	I0910 11:19:27.270982    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846243f826bf"
	I0910 11:19:27.282549    5456 logs.go:123] Gathering logs for kube-controller-manager [c8a0b8755183] ...
	I0910 11:19:27.282561    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a0b8755183"
	I0910 11:19:27.300131    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:19:27.300142    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:19:29.825377    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:19:34.825689    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:19:34.826063    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:19:34.863158    5456 logs.go:276] 1 containers: [73521dd0cfb7]
	I0910 11:19:34.863282    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:19:34.881274    5456 logs.go:276] 1 containers: [8dfcec5f4da8]
	I0910 11:19:34.881363    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:19:34.894706    5456 logs.go:276] 4 containers: [712437a68314 078403c98d30 1052fb80b9f5 846243f826bf]
	I0910 11:19:34.894784    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:19:34.906278    5456 logs.go:276] 1 containers: [6b215cc88f63]
	I0910 11:19:34.906348    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:19:34.916737    5456 logs.go:276] 1 containers: [e146a047350b]
	I0910 11:19:34.916807    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:19:34.927012    5456 logs.go:276] 1 containers: [c8a0b8755183]
	I0910 11:19:34.927074    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:19:34.937067    5456 logs.go:276] 0 containers: []
	W0910 11:19:34.937079    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:19:34.937137    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:19:34.947402    5456 logs.go:276] 1 containers: [56245aee5584]
	I0910 11:19:34.947419    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:19:34.947425    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:19:34.983632    5456 logs.go:123] Gathering logs for coredns [712437a68314] ...
	I0910 11:19:34.983646    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712437a68314"
	I0910 11:19:34.995385    5456 logs.go:123] Gathering logs for coredns [1052fb80b9f5] ...
	I0910 11:19:34.995399    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1052fb80b9f5"
	I0910 11:19:35.007401    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:19:35.007413    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:19:35.011695    5456 logs.go:123] Gathering logs for kube-apiserver [73521dd0cfb7] ...
	I0910 11:19:35.011701    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73521dd0cfb7"
	I0910 11:19:35.025546    5456 logs.go:123] Gathering logs for etcd [8dfcec5f4da8] ...
	I0910 11:19:35.025556    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfcec5f4da8"
	I0910 11:19:35.039423    5456 logs.go:123] Gathering logs for coredns [846243f826bf] ...
	I0910 11:19:35.039432    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846243f826bf"
	I0910 11:19:35.052557    5456 logs.go:123] Gathering logs for kube-controller-manager [c8a0b8755183] ...
	I0910 11:19:35.052572    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a0b8755183"
	I0910 11:19:35.071187    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:19:35.071197    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:19:35.097014    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:19:35.097035    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:19:35.109504    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:19:35.109515    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:19:35.145366    5456 logs.go:123] Gathering logs for coredns [078403c98d30] ...
	I0910 11:19:35.145377    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078403c98d30"
	I0910 11:19:35.156758    5456 logs.go:123] Gathering logs for kube-scheduler [6b215cc88f63] ...
	I0910 11:19:35.156769    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b215cc88f63"
	I0910 11:19:35.174318    5456 logs.go:123] Gathering logs for kube-proxy [e146a047350b] ...
	I0910 11:19:35.174329    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e146a047350b"
	I0910 11:19:35.185759    5456 logs.go:123] Gathering logs for storage-provisioner [56245aee5584] ...
	I0910 11:19:35.185768    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56245aee5584"
	I0910 11:19:37.697332    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:19:42.699432    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:19:42.699543    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:19:42.711713    5456 logs.go:276] 1 containers: [73521dd0cfb7]
	I0910 11:19:42.711780    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:19:42.722290    5456 logs.go:276] 1 containers: [8dfcec5f4da8]
	I0910 11:19:42.722360    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:19:42.733012    5456 logs.go:276] 4 containers: [712437a68314 078403c98d30 1052fb80b9f5 846243f826bf]
	I0910 11:19:42.733081    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:19:42.743535    5456 logs.go:276] 1 containers: [6b215cc88f63]
	I0910 11:19:42.743600    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:19:42.754382    5456 logs.go:276] 1 containers: [e146a047350b]
	I0910 11:19:42.754441    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:19:42.772095    5456 logs.go:276] 1 containers: [c8a0b8755183]
	I0910 11:19:42.772162    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:19:42.782421    5456 logs.go:276] 0 containers: []
	W0910 11:19:42.782430    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:19:42.782480    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:19:42.792476    5456 logs.go:276] 1 containers: [56245aee5584]
	I0910 11:19:42.792490    5456 logs.go:123] Gathering logs for kube-controller-manager [c8a0b8755183] ...
	I0910 11:19:42.792495    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a0b8755183"
	I0910 11:19:42.809528    5456 logs.go:123] Gathering logs for storage-provisioner [56245aee5584] ...
	I0910 11:19:42.809539    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56245aee5584"
	I0910 11:19:42.825189    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:19:42.825202    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:19:42.829437    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:19:42.829446    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:19:42.865088    5456 logs.go:123] Gathering logs for kube-apiserver [73521dd0cfb7] ...
	I0910 11:19:42.865100    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73521dd0cfb7"
	I0910 11:19:42.879018    5456 logs.go:123] Gathering logs for coredns [712437a68314] ...
	I0910 11:19:42.879030    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712437a68314"
	I0910 11:19:42.897493    5456 logs.go:123] Gathering logs for coredns [1052fb80b9f5] ...
	I0910 11:19:42.897503    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1052fb80b9f5"
	I0910 11:19:42.909244    5456 logs.go:123] Gathering logs for kube-proxy [e146a047350b] ...
	I0910 11:19:42.909255    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e146a047350b"
	I0910 11:19:42.920410    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:19:42.920422    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:19:42.953092    5456 logs.go:123] Gathering logs for etcd [8dfcec5f4da8] ...
	I0910 11:19:42.953104    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfcec5f4da8"
	I0910 11:19:42.966297    5456 logs.go:123] Gathering logs for coredns [078403c98d30] ...
	I0910 11:19:42.966307    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078403c98d30"
	I0910 11:19:42.977479    5456 logs.go:123] Gathering logs for coredns [846243f826bf] ...
	I0910 11:19:42.977491    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846243f826bf"
	I0910 11:19:42.988839    5456 logs.go:123] Gathering logs for kube-scheduler [6b215cc88f63] ...
	I0910 11:19:42.988852    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b215cc88f63"
	I0910 11:19:43.004012    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:19:43.004025    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:19:43.028687    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:19:43.028697    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:19:45.542354    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:19:50.544803    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:19:50.545239    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0910 11:19:50.587323    5456 logs.go:276] 1 containers: [73521dd0cfb7]
	I0910 11:19:50.587444    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0910 11:19:50.607901    5456 logs.go:276] 1 containers: [8dfcec5f4da8]
	I0910 11:19:50.607994    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0910 11:19:50.623116    5456 logs.go:276] 4 containers: [712437a68314 078403c98d30 1052fb80b9f5 846243f826bf]
	I0910 11:19:50.623195    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0910 11:19:50.634969    5456 logs.go:276] 1 containers: [6b215cc88f63]
	I0910 11:19:50.635041    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0910 11:19:50.645472    5456 logs.go:276] 1 containers: [e146a047350b]
	I0910 11:19:50.645547    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0910 11:19:50.656711    5456 logs.go:276] 1 containers: [c8a0b8755183]
	I0910 11:19:50.656781    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0910 11:19:50.667383    5456 logs.go:276] 0 containers: []
	W0910 11:19:50.667394    5456 logs.go:278] No container was found matching "kindnet"
	I0910 11:19:50.667447    5456 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0910 11:19:50.677576    5456 logs.go:276] 1 containers: [56245aee5584]
	I0910 11:19:50.677593    5456 logs.go:123] Gathering logs for dmesg ...
	I0910 11:19:50.677599    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 11:19:50.682467    5456 logs.go:123] Gathering logs for describe nodes ...
	I0910 11:19:50.682476    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 11:19:50.716282    5456 logs.go:123] Gathering logs for coredns [1052fb80b9f5] ...
	I0910 11:19:50.716296    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1052fb80b9f5"
	I0910 11:19:50.729508    5456 logs.go:123] Gathering logs for kube-controller-manager [c8a0b8755183] ...
	I0910 11:19:50.729522    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a0b8755183"
	I0910 11:19:50.746939    5456 logs.go:123] Gathering logs for kube-apiserver [73521dd0cfb7] ...
	I0910 11:19:50.746948    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73521dd0cfb7"
	I0910 11:19:50.761059    5456 logs.go:123] Gathering logs for coredns [078403c98d30] ...
	I0910 11:19:50.761069    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078403c98d30"
	I0910 11:19:50.772169    5456 logs.go:123] Gathering logs for kube-scheduler [6b215cc88f63] ...
	I0910 11:19:50.772181    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b215cc88f63"
	I0910 11:19:50.786858    5456 logs.go:123] Gathering logs for coredns [712437a68314] ...
	I0910 11:19:50.786870    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712437a68314"
	I0910 11:19:50.798400    5456 logs.go:123] Gathering logs for Docker ...
	I0910 11:19:50.798414    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0910 11:19:50.821291    5456 logs.go:123] Gathering logs for kubelet ...
	I0910 11:19:50.821301    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 11:19:50.854463    5456 logs.go:123] Gathering logs for etcd [8dfcec5f4da8] ...
	I0910 11:19:50.854472    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfcec5f4da8"
	I0910 11:19:50.868172    5456 logs.go:123] Gathering logs for coredns [846243f826bf] ...
	I0910 11:19:50.868182    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 846243f826bf"
	I0910 11:19:50.879687    5456 logs.go:123] Gathering logs for kube-proxy [e146a047350b] ...
	I0910 11:19:50.879700    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e146a047350b"
	I0910 11:19:50.891605    5456 logs.go:123] Gathering logs for storage-provisioner [56245aee5584] ...
	I0910 11:19:50.891614    5456 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56245aee5584"
	I0910 11:19:50.903357    5456 logs.go:123] Gathering logs for container status ...
	I0910 11:19:50.903370    5456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 11:19:53.415104    5456 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0910 11:19:58.417142    5456 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0910 11:19:58.421606    5456 out.go:201] 
	W0910 11:19:58.425655    5456 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0910 11:19:58.425666    5456 out.go:270] * 
	W0910 11:19:58.426039    5456 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:19:58.443587    5456 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-163000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (572.72s)
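Note on the failure above: the start loop repeatedly probes the guest apiserver at https://10.0.2.15:8443/healthz with a 5-second client timeout, and the GUEST_START exit fires once nothing has answered within the 6-minute node-wait budget. A minimal sketch of running the same probe by hand, using this run's binary and profile name (the curl flags are an assumption, as is curl being present in the guest image; this is diagnostic scaffolding, not the suite's own code):

	# a healthy apiserver replies "ok" to /healthz within the 5s budget
	out/minikube-darwin-arm64 ssh -p stopped-upgrade-163000 -- \
	  curl -sk --max-time 5 https://10.0.2.15:8443/healthz; echo "exit=$?"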

                                                
                                    
TestPause/serial/Start (9.86s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-314000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
E0910 11:17:20.634081    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/functional-475000/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-314000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.808060083s)

                                                
                                                
-- stdout --
	* [pause-314000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-314000" primary control-plane node in "pause-314000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-314000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-314000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-314000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-314000 -n pause-314000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-314000 -n pause-314000: exit status 7 (52.517084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-314000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.86s)
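Every qemu2 failure in this group shares the root cause shown in the stderr above: nothing was listening on /var/run/socket_vmnet, so host creation failed before provisioning began. A hedged pre-flight check for the daemon, assuming the default paths that appear in these logs (diagnostic scaffolding, not part of the test suite):

	# does the socket exist, and does it accept connections?
	ls -l /var/run/socket_vmnet
	sudo nc -U /var/run/socket_vmnet </dev/null && echo reachable || echo refused
	# is a socket_vmnet service loaded? (the label depends on the install method)
	sudo launchctl list | grep -i vmnet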

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-606000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-606000 --driver=qemu2 : exit status 80 (9.790873833s)

                                                
                                                
-- stdout --
	* [NoKubernetes-606000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-606000" primary control-plane node in "NoKubernetes-606000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-606000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-606000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-606000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-606000 -n NoKubernetes-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-606000 -n NoKubernetes-606000: exit status 7 (61.904667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.85s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-606000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-606000 --no-kubernetes --driver=qemu2 : exit status 80 (5.2422055s)

                                                
                                                
-- stdout --
	* [NoKubernetes-606000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-606000
	* Restarting existing qemu2 VM for "NoKubernetes-606000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-606000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-606000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-606000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-606000 -n NoKubernetes-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-606000 -n NoKubernetes-606000: exit status 7 (67.608667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.31s)

                                                
                                    
TestNoKubernetes/serial/Start (5.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-606000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-606000 --no-kubernetes --driver=qemu2 : exit status 80 (5.237707584s)

                                                
                                                
-- stdout --
	* [NoKubernetes-606000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-606000
	* Restarting existing qemu2 VM for "NoKubernetes-606000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-606000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-606000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-606000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-606000 -n NoKubernetes-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-606000 -n NoKubernetes-606000: exit status 7 (53.730167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-606000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-606000 --driver=qemu2 : exit status 80 (5.239497584s)

                                                
                                                
-- stdout --
	* [NoKubernetes-606000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-606000
	* Restarting existing qemu2 VM for "NoKubernetes-606000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-606000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-606000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-606000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-606000 -n NoKubernetes-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-606000 -n NoKubernetes-606000: exit status 7 (56.908625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.30s)
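All four NoKubernetes starts fail the same way, so re-runs are pointless until the helper daemon is back. Two ways to bring socket_vmnet up, hedged because this CI host uses a /opt/socket_vmnet install and the service name may differ: the Homebrew service route, or the foreground invocation documented in the socket_vmnet README:

	# Homebrew install (assumption: formula name "socket_vmnet")
	sudo brew services restart socket_vmnet
	# manual install: run the daemon in the foreground on the expected socket path
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet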

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.909008084s)

                                                
                                                
-- stdout --
	* [auto-425000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-425000" primary control-plane node in "auto-425000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-425000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 11:18:29.478884    5717 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:18:29.479021    5717 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:18:29.479023    5717 out.go:358] Setting ErrFile to fd 2...
	I0910 11:18:29.479026    5717 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:18:29.479146    5717 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:18:29.480221    5717 out.go:352] Setting JSON to false
	I0910 11:18:29.496732    5717 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4673,"bootTime":1725987636,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:18:29.496797    5717 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:18:29.503403    5717 out.go:177] * [auto-425000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:18:29.511317    5717 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:18:29.511359    5717 notify.go:220] Checking for updates...
	I0910 11:18:29.517278    5717 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:18:29.520283    5717 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:18:29.523330    5717 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:18:29.526301    5717 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:18:29.529291    5717 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:18:29.532612    5717 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:18:29.532673    5717 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0910 11:18:29.532735    5717 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:18:29.537298    5717 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 11:18:29.544313    5717 start.go:297] selected driver: qemu2
	I0910 11:18:29.544317    5717 start.go:901] validating driver "qemu2" against <nil>
	I0910 11:18:29.544322    5717 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:18:29.546399    5717 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 11:18:29.550277    5717 out.go:177] * Automatically selected the socket_vmnet network
	I0910 11:18:29.553406    5717 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 11:18:29.553463    5717 cni.go:84] Creating CNI manager for ""
	I0910 11:18:29.553472    5717 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:18:29.553476    5717 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 11:18:29.553507    5717 start.go:340] cluster config:
	{Name:auto-425000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:18:29.556828    5717 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:18:29.565270    5717 out.go:177] * Starting "auto-425000" primary control-plane node in "auto-425000" cluster
	I0910 11:18:29.569359    5717 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:18:29.569373    5717 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:18:29.569379    5717 cache.go:56] Caching tarball of preloaded images
	I0910 11:18:29.569431    5717 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:18:29.569436    5717 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:18:29.569500    5717 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/auto-425000/config.json ...
	I0910 11:18:29.569510    5717 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/auto-425000/config.json: {Name:mkfe93dc4dc53dad443636a292b88ef9931f19dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:18:29.569920    5717 start.go:360] acquireMachinesLock for auto-425000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:18:29.569951    5717 start.go:364] duration metric: took 25.167µs to acquireMachinesLock for "auto-425000"
	I0910 11:18:29.569962    5717 start.go:93] Provisioning new machine with config: &{Name:auto-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:18:29.570035    5717 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:18:29.577370    5717 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 11:18:29.594050    5717 start.go:159] libmachine.API.Create for "auto-425000" (driver="qemu2")
	I0910 11:18:29.594072    5717 client.go:168] LocalClient.Create starting
	I0910 11:18:29.594138    5717 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:18:29.594169    5717 main.go:141] libmachine: Decoding PEM data...
	I0910 11:18:29.594178    5717 main.go:141] libmachine: Parsing certificate...
	I0910 11:18:29.594218    5717 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:18:29.594245    5717 main.go:141] libmachine: Decoding PEM data...
	I0910 11:18:29.594256    5717 main.go:141] libmachine: Parsing certificate...
	I0910 11:18:29.594769    5717 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:18:29.756651    5717 main.go:141] libmachine: Creating SSH key...
	I0910 11:18:29.916547    5717 main.go:141] libmachine: Creating Disk image...
	I0910 11:18:29.916558    5717 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:18:29.916808    5717 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/auto-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/auto-425000/disk.qcow2
	I0910 11:18:29.926226    5717 main.go:141] libmachine: STDOUT: 
	I0910 11:18:29.926244    5717 main.go:141] libmachine: STDERR: 
	I0910 11:18:29.926295    5717 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/auto-425000/disk.qcow2 +20000M
	I0910 11:18:29.934801    5717 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:18:29.934825    5717 main.go:141] libmachine: STDERR: 
	I0910 11:18:29.934845    5717 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/auto-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/auto-425000/disk.qcow2
	I0910 11:18:29.934849    5717 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:18:29.934859    5717 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:18:29.934892    5717 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/auto-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/auto-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/auto-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:eb:0d:f6:f9:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/auto-425000/disk.qcow2
	I0910 11:18:29.936655    5717 main.go:141] libmachine: STDOUT: 
	I0910 11:18:29.936671    5717 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:18:29.936691    5717 client.go:171] duration metric: took 342.624666ms to LocalClient.Create
	I0910 11:18:31.938765    5717 start.go:128] duration metric: took 2.368787791s to createHost
	I0910 11:18:31.938784    5717 start.go:83] releasing machines lock for "auto-425000", held for 2.368892125s
	W0910 11:18:31.938802    5717 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:18:31.951212    5717 out.go:177] * Deleting "auto-425000" in qemu2 ...
	W0910 11:18:31.962180    5717 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:18:31.962190    5717 start.go:729] Will try again in 5 seconds ...
	I0910 11:18:36.964294    5717 start.go:360] acquireMachinesLock for auto-425000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:18:36.964908    5717 start.go:364] duration metric: took 504µs to acquireMachinesLock for "auto-425000"
	I0910 11:18:36.965025    5717 start.go:93] Provisioning new machine with config: &{Name:auto-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.0 ClusterName:auto-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:18:36.965255    5717 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:18:36.974625    5717 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 11:18:37.026581    5717 start.go:159] libmachine.API.Create for "auto-425000" (driver="qemu2")
	I0910 11:18:37.026642    5717 client.go:168] LocalClient.Create starting
	I0910 11:18:37.026783    5717 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:18:37.026850    5717 main.go:141] libmachine: Decoding PEM data...
	I0910 11:18:37.026866    5717 main.go:141] libmachine: Parsing certificate...
	I0910 11:18:37.026936    5717 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:18:37.026981    5717 main.go:141] libmachine: Decoding PEM data...
	I0910 11:18:37.026992    5717 main.go:141] libmachine: Parsing certificate...
	I0910 11:18:37.027595    5717 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:18:37.198781    5717 main.go:141] libmachine: Creating SSH key...
	I0910 11:18:37.293917    5717 main.go:141] libmachine: Creating Disk image...
	I0910 11:18:37.293924    5717 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:18:37.294181    5717 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/auto-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/auto-425000/disk.qcow2
	I0910 11:18:37.303560    5717 main.go:141] libmachine: STDOUT: 
	I0910 11:18:37.303578    5717 main.go:141] libmachine: STDERR: 
	I0910 11:18:37.303625    5717 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/auto-425000/disk.qcow2 +20000M
	I0910 11:18:37.311694    5717 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:18:37.311708    5717 main.go:141] libmachine: STDERR: 
	I0910 11:18:37.311720    5717 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/auto-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/auto-425000/disk.qcow2
	I0910 11:18:37.311724    5717 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:18:37.311737    5717 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:18:37.311778    5717 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/auto-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/auto-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/auto-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:cf:63:f4:9d:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/auto-425000/disk.qcow2
	I0910 11:18:37.313419    5717 main.go:141] libmachine: STDOUT: 
	I0910 11:18:37.313437    5717 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:18:37.313452    5717 client.go:171] duration metric: took 286.809625ms to LocalClient.Create
	I0910 11:18:39.315705    5717 start.go:128] duration metric: took 2.350484916s to createHost
	I0910 11:18:39.315767    5717 start.go:83] releasing machines lock for "auto-425000", held for 2.350899s
	W0910 11:18:39.315927    5717 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:18:39.335345    5717 out.go:201] 
	W0910 11:18:39.336675    5717 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:18:39.336692    5717 out.go:270] * 
	* 
	W0910 11:18:39.337995    5717 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:18:39.346329    5717 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.91s)
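Both failed attempts above break at the same precondition: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet, so QEMU is never handed a network file descriptor. A minimal standalone check (a triage sketch, not part of the test suite) that reproduces the failing step by dialing the same Unix socket the client must reach:

	// Sketch: dial the socket_vmnet Unix socket the same way
	// socket_vmnet_client must before it can pass QEMU an fd.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the failing command line above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// On this agent this should print the same "connection refused"
			// that surfaces in the minikube stderr above.
			fmt.Fprintf(os.Stderr, "cannot reach %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("%s is accepting connections\n", sock)
	}

If the dial fails, the socket_vmnet daemon is most likely not running on the agent; restoring it (on Homebrew installs it is typically run as a root-owned service) should clear this whole family of Start failures, since every network-plugin test below fails with the identical error.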

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (9.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.875344958s)

                                                
                                                
-- stdout --
	* [kindnet-425000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-425000" primary control-plane node in "kindnet-425000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-425000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 11:18:41.541906    5829 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:18:41.542043    5829 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:18:41.542046    5829 out.go:358] Setting ErrFile to fd 2...
	I0910 11:18:41.542056    5829 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:18:41.542205    5829 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:18:41.543341    5829 out.go:352] Setting JSON to false
	I0910 11:18:41.559695    5829 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4685,"bootTime":1725987636,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:18:41.559773    5829 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:18:41.566479    5829 out.go:177] * [kindnet-425000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:18:41.570496    5829 notify.go:220] Checking for updates...
	I0910 11:18:41.574352    5829 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:18:41.579399    5829 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:18:41.582354    5829 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:18:41.586322    5829 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:18:41.589395    5829 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:18:41.592420    5829 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:18:41.595645    5829 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:18:41.595714    5829 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0910 11:18:41.595768    5829 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:18:41.599402    5829 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 11:18:41.606349    5829 start.go:297] selected driver: qemu2
	I0910 11:18:41.606353    5829 start.go:901] validating driver "qemu2" against <nil>
	I0910 11:18:41.606358    5829 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:18:41.608675    5829 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 11:18:41.611433    5829 out.go:177] * Automatically selected the socket_vmnet network
	I0910 11:18:41.612946    5829 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 11:18:41.612988    5829 cni.go:84] Creating CNI manager for "kindnet"
	I0910 11:18:41.612993    5829 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0910 11:18:41.613023    5829 start.go:340] cluster config:
	{Name:kindnet-425000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:18:41.616658    5829 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:18:41.628348    5829 out.go:177] * Starting "kindnet-425000" primary control-plane node in "kindnet-425000" cluster
	I0910 11:18:41.632410    5829 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:18:41.632425    5829 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:18:41.632431    5829 cache.go:56] Caching tarball of preloaded images
	I0910 11:18:41.632501    5829 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:18:41.632507    5829 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:18:41.632567    5829 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/kindnet-425000/config.json ...
	I0910 11:18:41.632579    5829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/kindnet-425000/config.json: {Name:mka5486ab6599ee1f7ace1da44c515b70339e4f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:18:41.632791    5829 start.go:360] acquireMachinesLock for kindnet-425000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:18:41.632820    5829 start.go:364] duration metric: took 24.292µs to acquireMachinesLock for "kindnet-425000"
	I0910 11:18:41.632831    5829 start.go:93] Provisioning new machine with config: &{Name:kindnet-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kindnet-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:18:41.632864    5829 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:18:41.641362    5829 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 11:18:41.656362    5829 start.go:159] libmachine.API.Create for "kindnet-425000" (driver="qemu2")
	I0910 11:18:41.656388    5829 client.go:168] LocalClient.Create starting
	I0910 11:18:41.656461    5829 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:18:41.656491    5829 main.go:141] libmachine: Decoding PEM data...
	I0910 11:18:41.656501    5829 main.go:141] libmachine: Parsing certificate...
	I0910 11:18:41.656533    5829 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:18:41.656558    5829 main.go:141] libmachine: Decoding PEM data...
	I0910 11:18:41.656568    5829 main.go:141] libmachine: Parsing certificate...
	I0910 11:18:41.656911    5829 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:18:41.816438    5829 main.go:141] libmachine: Creating SSH key...
	I0910 11:18:41.872663    5829 main.go:141] libmachine: Creating Disk image...
	I0910 11:18:41.872669    5829 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:18:41.872901    5829 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kindnet-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kindnet-425000/disk.qcow2
	I0910 11:18:41.882058    5829 main.go:141] libmachine: STDOUT: 
	I0910 11:18:41.882076    5829 main.go:141] libmachine: STDERR: 
	I0910 11:18:41.882138    5829 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kindnet-425000/disk.qcow2 +20000M
	I0910 11:18:41.890348    5829 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:18:41.890363    5829 main.go:141] libmachine: STDERR: 
	I0910 11:18:41.890375    5829 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kindnet-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kindnet-425000/disk.qcow2
	I0910 11:18:41.890381    5829 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:18:41.890399    5829 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:18:41.890427    5829 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kindnet-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kindnet-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kindnet-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:e1:c1:48:bf:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kindnet-425000/disk.qcow2
	I0910 11:18:41.892037    5829 main.go:141] libmachine: STDOUT: 
	I0910 11:18:41.892061    5829 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:18:41.892084    5829 client.go:171] duration metric: took 235.699416ms to LocalClient.Create
	I0910 11:18:43.894296    5829 start.go:128] duration metric: took 2.261463542s to createHost
	I0910 11:18:43.894403    5829 start.go:83] releasing machines lock for "kindnet-425000", held for 2.261633166s
	W0910 11:18:43.894502    5829 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:18:43.912873    5829 out.go:177] * Deleting "kindnet-425000" in qemu2 ...
	W0910 11:18:43.943938    5829 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:18:43.943969    5829 start.go:729] Will try again in 5 seconds ...
	I0910 11:18:48.946033    5829 start.go:360] acquireMachinesLock for kindnet-425000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:18:48.946620    5829 start.go:364] duration metric: took 491.459µs to acquireMachinesLock for "kindnet-425000"
	I0910 11:18:48.946730    5829 start.go:93] Provisioning new machine with config: &{Name:kindnet-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kindnet-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:18:48.946957    5829 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:18:48.952595    5829 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 11:18:48.996443    5829 start.go:159] libmachine.API.Create for "kindnet-425000" (driver="qemu2")
	I0910 11:18:48.996495    5829 client.go:168] LocalClient.Create starting
	I0910 11:18:48.996618    5829 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:18:48.996685    5829 main.go:141] libmachine: Decoding PEM data...
	I0910 11:18:48.996699    5829 main.go:141] libmachine: Parsing certificate...
	I0910 11:18:48.996755    5829 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:18:48.996794    5829 main.go:141] libmachine: Decoding PEM data...
	I0910 11:18:48.996813    5829 main.go:141] libmachine: Parsing certificate...
	I0910 11:18:48.997497    5829 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:18:49.164255    5829 main.go:141] libmachine: Creating SSH key...
	I0910 11:18:49.322651    5829 main.go:141] libmachine: Creating Disk image...
	I0910 11:18:49.322663    5829 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:18:49.322944    5829 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kindnet-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kindnet-425000/disk.qcow2
	I0910 11:18:49.332472    5829 main.go:141] libmachine: STDOUT: 
	I0910 11:18:49.332489    5829 main.go:141] libmachine: STDERR: 
	I0910 11:18:49.332545    5829 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kindnet-425000/disk.qcow2 +20000M
	I0910 11:18:49.340424    5829 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:18:49.340439    5829 main.go:141] libmachine: STDERR: 
	I0910 11:18:49.340450    5829 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kindnet-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kindnet-425000/disk.qcow2
	I0910 11:18:49.340453    5829 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:18:49.340466    5829 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:18:49.340510    5829 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kindnet-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kindnet-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kindnet-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:48:2a:16:44:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kindnet-425000/disk.qcow2
	I0910 11:18:49.342172    5829 main.go:141] libmachine: STDOUT: 
	I0910 11:18:49.342194    5829 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:18:49.342207    5829 client.go:171] duration metric: took 345.714834ms to LocalClient.Create
	I0910 11:18:51.344364    5829 start.go:128] duration metric: took 2.39743475s to createHost
	I0910 11:18:51.344459    5829 start.go:83] releasing machines lock for "kindnet-425000", held for 2.39784675s
	W0910 11:18:51.344822    5829 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:18:51.354435    5829 out.go:201] 
	W0910 11:18:51.362627    5829 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:18:51.362667    5829 out.go:270] * 
	* 
	W0910 11:18:51.365521    5829 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:18:51.374439    5829 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.88s)
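The stderr above also shows the shape of minikube's recovery attempt: create the host, hit the socket error, delete the half-created profile, wait five seconds, create once more, then exit with GUEST_PROVISION. A condensed sketch of that control flow (illustrative only; not minikube's actual implementation):

	// Sketch: the one-retry create loop visible in the logs.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for the real libmachine create path; on this
	// agent it always fails the way the logs show.
	func createHost(name string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		const profile = "kindnet-425000"
		if err := createHost(profile); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			// minikube deletes the partially created machine here, then waits.
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := createHost(profile); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
				os.Exit(80) // the exit status net_test.go:114 reports
			}
		}
	}

Because the second attempt hits the same refused socket, each of these tests spends roughly ten seconds (two ~2.3s create attempts plus the five-second wait) before failing, which matches the durations in the failure table.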

                                                
                                    
TestNetworkPlugins/group/calico/Start (9.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.835127958s)

                                                
                                                
-- stdout --
	* [calico-425000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-425000" primary control-plane node in "calico-425000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-425000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 11:18:53.638200    5948 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:18:53.638335    5948 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:18:53.638338    5948 out.go:358] Setting ErrFile to fd 2...
	I0910 11:18:53.638341    5948 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:18:53.638472    5948 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:18:53.639546    5948 out.go:352] Setting JSON to false
	I0910 11:18:53.656119    5948 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4697,"bootTime":1725987636,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:18:53.656188    5948 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:18:53.662956    5948 out.go:177] * [calico-425000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:18:53.670899    5948 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:18:53.670949    5948 notify.go:220] Checking for updates...
	I0910 11:18:53.678756    5948 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:18:53.681836    5948 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:18:53.684862    5948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:18:53.687858    5948 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:18:53.690844    5948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:18:53.694156    5948 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:18:53.694222    5948 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0910 11:18:53.694280    5948 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:18:53.697768    5948 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 11:18:53.704843    5948 start.go:297] selected driver: qemu2
	I0910 11:18:53.704848    5948 start.go:901] validating driver "qemu2" against <nil>
	I0910 11:18:53.704853    5948 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:18:53.707188    5948 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 11:18:53.709843    5948 out.go:177] * Automatically selected the socket_vmnet network
	I0910 11:18:53.712894    5948 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 11:18:53.712942    5948 cni.go:84] Creating CNI manager for "calico"
	I0910 11:18:53.712946    5948 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0910 11:18:53.712980    5948 start.go:340] cluster config:
	{Name:calico-425000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:18:53.716757    5948 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:18:53.724754    5948 out.go:177] * Starting "calico-425000" primary control-plane node in "calico-425000" cluster
	I0910 11:18:53.728860    5948 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:18:53.728878    5948 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:18:53.728887    5948 cache.go:56] Caching tarball of preloaded images
	I0910 11:18:53.728954    5948 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:18:53.728959    5948 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:18:53.729013    5948 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/calico-425000/config.json ...
	I0910 11:18:53.729028    5948 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/calico-425000/config.json: {Name:mk02c678f41d8527547eeb9a05e791a3b4fa3bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:18:53.729247    5948 start.go:360] acquireMachinesLock for calico-425000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:18:53.729281    5948 start.go:364] duration metric: took 27.917µs to acquireMachinesLock for "calico-425000"
	I0910 11:18:53.729293    5948 start.go:93] Provisioning new machine with config: &{Name:calico-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:calico-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:18:53.729332    5948 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:18:53.736799    5948 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 11:18:53.752019    5948 start.go:159] libmachine.API.Create for "calico-425000" (driver="qemu2")
	I0910 11:18:53.752044    5948 client.go:168] LocalClient.Create starting
	I0910 11:18:53.752111    5948 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:18:53.752142    5948 main.go:141] libmachine: Decoding PEM data...
	I0910 11:18:53.752152    5948 main.go:141] libmachine: Parsing certificate...
	I0910 11:18:53.752187    5948 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:18:53.752210    5948 main.go:141] libmachine: Decoding PEM data...
	I0910 11:18:53.752219    5948 main.go:141] libmachine: Parsing certificate...
	I0910 11:18:53.752665    5948 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:18:53.913665    5948 main.go:141] libmachine: Creating SSH key...
	I0910 11:18:54.036715    5948 main.go:141] libmachine: Creating Disk image...
	I0910 11:18:54.036721    5948 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:18:54.036960    5948 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/calico-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/calico-425000/disk.qcow2
	I0910 11:18:54.046135    5948 main.go:141] libmachine: STDOUT: 
	I0910 11:18:54.046157    5948 main.go:141] libmachine: STDERR: 
	I0910 11:18:54.046220    5948 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/calico-425000/disk.qcow2 +20000M
	I0910 11:18:54.054733    5948 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:18:54.054756    5948 main.go:141] libmachine: STDERR: 
	I0910 11:18:54.054768    5948 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/calico-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/calico-425000/disk.qcow2
	I0910 11:18:54.054772    5948 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:18:54.054789    5948 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:18:54.054819    5948 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/calico-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/calico-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/calico-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:ea:f3:cb:f4:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/calico-425000/disk.qcow2
	I0910 11:18:54.056728    5948 main.go:141] libmachine: STDOUT: 
	I0910 11:18:54.056744    5948 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:18:54.056764    5948 client.go:171] duration metric: took 304.720708ms to LocalClient.Create
	I0910 11:18:56.058275    5948 start.go:128] duration metric: took 2.328993333s to createHost
	I0910 11:18:56.058341    5948 start.go:83] releasing machines lock for "calico-425000", held for 2.329115125s
	W0910 11:18:56.058377    5948 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:18:56.063535    5948 out.go:177] * Deleting "calico-425000" in qemu2 ...
	W0910 11:18:56.092597    5948 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:18:56.092612    5948 start.go:729] Will try again in 5 seconds ...
	I0910 11:19:01.094699    5948 start.go:360] acquireMachinesLock for calico-425000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:19:01.095203    5948 start.go:364] duration metric: took 387.625µs to acquireMachinesLock for "calico-425000"
	I0910 11:19:01.095310    5948 start.go:93] Provisioning new machine with config: &{Name:calico-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:19:01.095631    5948 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:19:01.104218    5948 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 11:19:01.145264    5948 start.go:159] libmachine.API.Create for "calico-425000" (driver="qemu2")
	I0910 11:19:01.145311    5948 client.go:168] LocalClient.Create starting
	I0910 11:19:01.145438    5948 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:19:01.145503    5948 main.go:141] libmachine: Decoding PEM data...
	I0910 11:19:01.145519    5948 main.go:141] libmachine: Parsing certificate...
	I0910 11:19:01.145600    5948 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:19:01.145640    5948 main.go:141] libmachine: Decoding PEM data...
	I0910 11:19:01.145652    5948 main.go:141] libmachine: Parsing certificate...
	I0910 11:19:01.146132    5948 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:19:01.313615    5948 main.go:141] libmachine: Creating SSH key...
	I0910 11:19:01.390756    5948 main.go:141] libmachine: Creating Disk image...
	I0910 11:19:01.390764    5948 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:19:01.391030    5948 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/calico-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/calico-425000/disk.qcow2
	I0910 11:19:01.400419    5948 main.go:141] libmachine: STDOUT: 
	I0910 11:19:01.400440    5948 main.go:141] libmachine: STDERR: 
	I0910 11:19:01.400494    5948 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/calico-425000/disk.qcow2 +20000M
	I0910 11:19:01.408429    5948 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:19:01.408446    5948 main.go:141] libmachine: STDERR: 
	I0910 11:19:01.408468    5948 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/calico-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/calico-425000/disk.qcow2
	I0910 11:19:01.408473    5948 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:19:01.408484    5948 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:19:01.408510    5948 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/calico-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/calico-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/calico-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:05:28:85:0e:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/calico-425000/disk.qcow2
	I0910 11:19:01.410133    5948 main.go:141] libmachine: STDOUT: 
	I0910 11:19:01.410150    5948 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:19:01.410162    5948 client.go:171] duration metric: took 264.853375ms to LocalClient.Create
	I0910 11:19:03.412166    5948 start.go:128] duration metric: took 2.316571s to createHost
	I0910 11:19:03.412184    5948 start.go:83] releasing machines lock for "calico-425000", held for 2.317024917s
	W0910 11:19:03.412250    5948 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:19:03.420497    5948 out.go:201] 
	W0910 11:19:03.425556    5948 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:19:03.425565    5948 out.go:270] * 
	* 
	W0910 11:19:03.426135    5948 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:19:03.436424    5948 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.84s)
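
The failing step in every run above is the same: qemu-system-aarch64 is launched through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. Below is a minimal Go sketch of that connectivity check, assuming only the socket path shown in the config dumps; it is an illustrative probe, not part of the test suite.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same socket path as SocketVMnetPath in the cluster config above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// This is the "Connection refused" that surfaces in the logs above.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

When the daemon is down, this dial fails exactly as socket_vmnet_client does, which is what turns every start into exit status 80.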

TestNetworkPlugins/group/custom-flannel/Start (9.93s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.926997541s)

-- stdout --
	* [custom-flannel-425000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-425000" primary control-plane node in "custom-flannel-425000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-425000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 11:19:05.846250    6066 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:19:05.846390    6066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:19:05.846393    6066 out.go:358] Setting ErrFile to fd 2...
	I0910 11:19:05.846396    6066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:19:05.846526    6066 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:19:05.847651    6066 out.go:352] Setting JSON to false
	I0910 11:19:05.864058    6066 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4709,"bootTime":1725987636,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:19:05.864129    6066 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:19:05.871347    6066 out.go:177] * [custom-flannel-425000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:19:05.879270    6066 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:19:05.879312    6066 notify.go:220] Checking for updates...
	I0910 11:19:05.886255    6066 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:19:05.889185    6066 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:19:05.892293    6066 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:19:05.895289    6066 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:19:05.898199    6066 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:19:05.901580    6066 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:19:05.901646    6066 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0910 11:19:05.901702    6066 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:19:05.905305    6066 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 11:19:05.912229    6066 start.go:297] selected driver: qemu2
	I0910 11:19:05.912240    6066 start.go:901] validating driver "qemu2" against <nil>
	I0910 11:19:05.912247    6066 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:19:05.914433    6066 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 11:19:05.922275    6066 out.go:177] * Automatically selected the socket_vmnet network
	I0910 11:19:05.925252    6066 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 11:19:05.925287    6066 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0910 11:19:05.925295    6066 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0910 11:19:05.925321    6066 start.go:340] cluster config:
	{Name:custom-flannel-425000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:19:05.928757    6066 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:19:05.936105    6066 out.go:177] * Starting "custom-flannel-425000" primary control-plane node in "custom-flannel-425000" cluster
	I0910 11:19:05.940255    6066 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:19:05.940270    6066 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:19:05.940277    6066 cache.go:56] Caching tarball of preloaded images
	I0910 11:19:05.940341    6066 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:19:05.940346    6066 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:19:05.940408    6066 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/custom-flannel-425000/config.json ...
	I0910 11:19:05.940418    6066 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/custom-flannel-425000/config.json: {Name:mk11e4ed96c34dd3304aa2e6dd5d45384f890c89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:19:05.940636    6066 start.go:360] acquireMachinesLock for custom-flannel-425000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:19:05.940668    6066 start.go:364] duration metric: took 25.917µs to acquireMachinesLock for "custom-flannel-425000"
	I0910 11:19:05.940680    6066 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:19:05.940706    6066 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:19:05.948233    6066 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 11:19:05.964844    6066 start.go:159] libmachine.API.Create for "custom-flannel-425000" (driver="qemu2")
	I0910 11:19:05.964866    6066 client.go:168] LocalClient.Create starting
	I0910 11:19:05.964951    6066 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:19:05.964990    6066 main.go:141] libmachine: Decoding PEM data...
	I0910 11:19:05.965000    6066 main.go:141] libmachine: Parsing certificate...
	I0910 11:19:05.965049    6066 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:19:05.965072    6066 main.go:141] libmachine: Decoding PEM data...
	I0910 11:19:05.965079    6066 main.go:141] libmachine: Parsing certificate...
	I0910 11:19:05.965425    6066 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:19:06.126754    6066 main.go:141] libmachine: Creating SSH key...
	I0910 11:19:06.266880    6066 main.go:141] libmachine: Creating Disk image...
	I0910 11:19:06.266890    6066 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:19:06.267143    6066 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/custom-flannel-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/custom-flannel-425000/disk.qcow2
	I0910 11:19:06.276546    6066 main.go:141] libmachine: STDOUT: 
	I0910 11:19:06.276566    6066 main.go:141] libmachine: STDERR: 
	I0910 11:19:06.276631    6066 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/custom-flannel-425000/disk.qcow2 +20000M
	I0910 11:19:06.284547    6066 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:19:06.284575    6066 main.go:141] libmachine: STDERR: 
	I0910 11:19:06.284596    6066 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/custom-flannel-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/custom-flannel-425000/disk.qcow2
	I0910 11:19:06.284601    6066 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:19:06.284612    6066 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:19:06.284641    6066 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/custom-flannel-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/custom-flannel-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/custom-flannel-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:98:9e:48:2a:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/custom-flannel-425000/disk.qcow2
	I0910 11:19:06.286375    6066 main.go:141] libmachine: STDOUT: 
	I0910 11:19:06.286394    6066 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:19:06.286421    6066 client.go:171] duration metric: took 321.556333ms to LocalClient.Create
	I0910 11:19:08.288624    6066 start.go:128] duration metric: took 2.347950375s to createHost
	I0910 11:19:08.288711    6066 start.go:83] releasing machines lock for "custom-flannel-425000", held for 2.348095917s
	W0910 11:19:08.288772    6066 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:19:08.299047    6066 out.go:177] * Deleting "custom-flannel-425000" in qemu2 ...
	W0910 11:19:08.338148    6066 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:19:08.338177    6066 start.go:729] Will try again in 5 seconds ...
	I0910 11:19:13.340260    6066 start.go:360] acquireMachinesLock for custom-flannel-425000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:19:13.340795    6066 start.go:364] duration metric: took 428.459µs to acquireMachinesLock for "custom-flannel-425000"
	I0910 11:19:13.340970    6066 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:19:13.341289    6066 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:19:13.350852    6066 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 11:19:13.399786    6066 start.go:159] libmachine.API.Create for "custom-flannel-425000" (driver="qemu2")
	I0910 11:19:13.399845    6066 client.go:168] LocalClient.Create starting
	I0910 11:19:13.399980    6066 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:19:13.400039    6066 main.go:141] libmachine: Decoding PEM data...
	I0910 11:19:13.400053    6066 main.go:141] libmachine: Parsing certificate...
	I0910 11:19:13.400113    6066 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:19:13.400156    6066 main.go:141] libmachine: Decoding PEM data...
	I0910 11:19:13.400166    6066 main.go:141] libmachine: Parsing certificate...
	I0910 11:19:13.400783    6066 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:19:13.571146    6066 main.go:141] libmachine: Creating SSH key...
	I0910 11:19:13.676401    6066 main.go:141] libmachine: Creating Disk image...
	I0910 11:19:13.676411    6066 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:19:13.676656    6066 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/custom-flannel-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/custom-flannel-425000/disk.qcow2
	I0910 11:19:13.686091    6066 main.go:141] libmachine: STDOUT: 
	I0910 11:19:13.686110    6066 main.go:141] libmachine: STDERR: 
	I0910 11:19:13.686165    6066 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/custom-flannel-425000/disk.qcow2 +20000M
	I0910 11:19:13.694043    6066 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:19:13.694059    6066 main.go:141] libmachine: STDERR: 
	I0910 11:19:13.694077    6066 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/custom-flannel-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/custom-flannel-425000/disk.qcow2
	I0910 11:19:13.694083    6066 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:19:13.694092    6066 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:19:13.694124    6066 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/custom-flannel-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/custom-flannel-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/custom-flannel-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:bd:62:c4:ea:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/custom-flannel-425000/disk.qcow2
	I0910 11:19:13.695737    6066 main.go:141] libmachine: STDOUT: 
	I0910 11:19:13.695754    6066 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:19:13.695769    6066 client.go:171] duration metric: took 295.925875ms to LocalClient.Create
	I0910 11:19:15.697822    6066 start.go:128] duration metric: took 2.356560125s to createHost
	I0910 11:19:15.697876    6066 start.go:83] releasing machines lock for "custom-flannel-425000", held for 2.357114s
	W0910 11:19:15.698012    6066 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:19:15.716799    6066 out.go:201] 
	W0910 11:19:15.720940    6066 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:19:15.720949    6066 out.go:270] * 
	* 
	W0910 11:19:15.721771    6066 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:19:15.734667    6066 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.93s)
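
Note that in each run the disk-image preparation succeeds and only the networked VM launch fails. For reference, a hedged Go sketch of the two qemu-img steps the log reports as succeeding ("convert" then "resize"), using a hypothetical /tmp path rather than the profile's machines directory:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		disk := "/tmp/example-disk.qcow2" // hypothetical path; the log uses the profile's machines dir

		// Mirror of the two qemu-img invocations shown in the log:
		// convert the raw image to qcow2, then grow it by 20000M.
		steps := [][]string{
			{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", disk + ".raw", disk},
			{"qemu-img", "resize", disk, "+20000M"},
		}
		for _, args := range steps {
			out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
			fmt.Printf("executing: %v\n%s", args, out)
			if err != nil {
				fmt.Println("qemu-img failed:", err)
				return
			}
		}
	}

Both steps return empty STDERR in every run above; the failure begins only at the socket_vmnet_client launch that follows them.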

TestNetworkPlugins/group/false/Start (9.85s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.852728042s)

-- stdout --
	* [false-425000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-425000" primary control-plane node in "false-425000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-425000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 11:19:18.096893    6184 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:19:18.097028    6184 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:19:18.097032    6184 out.go:358] Setting ErrFile to fd 2...
	I0910 11:19:18.097034    6184 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:19:18.097162    6184 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:19:18.098356    6184 out.go:352] Setting JSON to false
	I0910 11:19:18.115118    6184 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4722,"bootTime":1725987636,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:19:18.115184    6184 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:19:18.122370    6184 out.go:177] * [false-425000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:19:18.131294    6184 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:19:18.131336    6184 notify.go:220] Checking for updates...
	I0910 11:19:18.137761    6184 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:19:18.141228    6184 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:19:18.144229    6184 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:19:18.147226    6184 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:19:18.150200    6184 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:19:18.153581    6184 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:19:18.153649    6184 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0910 11:19:18.153691    6184 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:19:18.158185    6184 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 11:19:18.165248    6184 start.go:297] selected driver: qemu2
	I0910 11:19:18.165254    6184 start.go:901] validating driver "qemu2" against <nil>
	I0910 11:19:18.165262    6184 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:19:18.167516    6184 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 11:19:18.171212    6184 out.go:177] * Automatically selected the socket_vmnet network
	I0910 11:19:18.174336    6184 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 11:19:18.174381    6184 cni.go:84] Creating CNI manager for "false"
	I0910 11:19:18.174408    6184 start.go:340] cluster config:
	{Name:false-425000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:19:18.178373    6184 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:19:18.186109    6184 out.go:177] * Starting "false-425000" primary control-plane node in "false-425000" cluster
	I0910 11:19:18.190159    6184 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:19:18.190172    6184 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:19:18.190178    6184 cache.go:56] Caching tarball of preloaded images
	I0910 11:19:18.190234    6184 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:19:18.190239    6184 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:19:18.190295    6184 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/false-425000/config.json ...
	I0910 11:19:18.190306    6184 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/false-425000/config.json: {Name:mkd597e333d9d454a99346bbbede9a35611d8fa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:19:18.190522    6184 start.go:360] acquireMachinesLock for false-425000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:19:18.190556    6184 start.go:364] duration metric: took 27.75µs to acquireMachinesLock for "false-425000"
	I0910 11:19:18.190569    6184 start.go:93] Provisioning new machine with config: &{Name:false-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:19:18.190594    6184 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:19:18.198176    6184 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 11:19:18.215711    6184 start.go:159] libmachine.API.Create for "false-425000" (driver="qemu2")
	I0910 11:19:18.215744    6184 client.go:168] LocalClient.Create starting
	I0910 11:19:18.215813    6184 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:19:18.215845    6184 main.go:141] libmachine: Decoding PEM data...
	I0910 11:19:18.215854    6184 main.go:141] libmachine: Parsing certificate...
	I0910 11:19:18.215893    6184 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:19:18.215916    6184 main.go:141] libmachine: Decoding PEM data...
	I0910 11:19:18.215926    6184 main.go:141] libmachine: Parsing certificate...
	I0910 11:19:18.216279    6184 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:19:18.376222    6184 main.go:141] libmachine: Creating SSH key...
	I0910 11:19:18.471836    6184 main.go:141] libmachine: Creating Disk image...
	I0910 11:19:18.471847    6184 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:19:18.472108    6184 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/false-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/false-425000/disk.qcow2
	I0910 11:19:18.481288    6184 main.go:141] libmachine: STDOUT: 
	I0910 11:19:18.481321    6184 main.go:141] libmachine: STDERR: 
	I0910 11:19:18.481378    6184 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/false-425000/disk.qcow2 +20000M
	I0910 11:19:18.489305    6184 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:19:18.489321    6184 main.go:141] libmachine: STDERR: 
	I0910 11:19:18.489338    6184 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/false-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/false-425000/disk.qcow2
	I0910 11:19:18.489343    6184 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:19:18.489366    6184 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:19:18.489393    6184 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/false-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/false-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/false-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:b5:f1:d3:ec:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/false-425000/disk.qcow2
	I0910 11:19:18.491027    6184 main.go:141] libmachine: STDOUT: 
	I0910 11:19:18.491044    6184 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:19:18.491067    6184 client.go:171] duration metric: took 275.326458ms to LocalClient.Create
	I0910 11:19:20.493220    6184 start.go:128] duration metric: took 2.30265575s to createHost
	I0910 11:19:20.493289    6184 start.go:83] releasing machines lock for "false-425000", held for 2.302783375s
	W0910 11:19:20.493409    6184 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:19:20.508551    6184 out.go:177] * Deleting "false-425000" in qemu2 ...
	W0910 11:19:20.543451    6184 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:19:20.543482    6184 start.go:729] Will try again in 5 seconds ...
	I0910 11:19:25.545646    6184 start.go:360] acquireMachinesLock for false-425000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:19:25.546176    6184 start.go:364] duration metric: took 420.166µs to acquireMachinesLock for "false-425000"
	I0910 11:19:25.546349    6184 start.go:93] Provisioning new machine with config: &{Name:false-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:19:25.546666    6184 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:19:25.552397    6184 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 11:19:25.599950    6184 start.go:159] libmachine.API.Create for "false-425000" (driver="qemu2")
	I0910 11:19:25.599999    6184 client.go:168] LocalClient.Create starting
	I0910 11:19:25.600130    6184 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:19:25.600204    6184 main.go:141] libmachine: Decoding PEM data...
	I0910 11:19:25.600220    6184 main.go:141] libmachine: Parsing certificate...
	I0910 11:19:25.600279    6184 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:19:25.600325    6184 main.go:141] libmachine: Decoding PEM data...
	I0910 11:19:25.600343    6184 main.go:141] libmachine: Parsing certificate...
	I0910 11:19:25.600881    6184 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:19:25.767682    6184 main.go:141] libmachine: Creating SSH key...
	I0910 11:19:25.850981    6184 main.go:141] libmachine: Creating Disk image...
	I0910 11:19:25.850987    6184 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:19:25.851218    6184 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/false-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/false-425000/disk.qcow2
	I0910 11:19:25.860613    6184 main.go:141] libmachine: STDOUT: 
	I0910 11:19:25.860641    6184 main.go:141] libmachine: STDERR: 
	I0910 11:19:25.860690    6184 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/false-425000/disk.qcow2 +20000M
	I0910 11:19:25.868960    6184 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:19:25.868976    6184 main.go:141] libmachine: STDERR: 
	I0910 11:19:25.868988    6184 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/false-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/false-425000/disk.qcow2
	I0910 11:19:25.868992    6184 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:19:25.869005    6184 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:19:25.869039    6184 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/false-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/false-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/false-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:fe:04:91:df:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/false-425000/disk.qcow2
	I0910 11:19:25.870693    6184 main.go:141] libmachine: STDOUT: 
	I0910 11:19:25.870718    6184 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:19:25.870731    6184 client.go:171] duration metric: took 270.734291ms to LocalClient.Create
	I0910 11:19:27.872961    6184 start.go:128] duration metric: took 2.326288583s to createHost
	I0910 11:19:27.873107    6184 start.go:83] releasing machines lock for "false-425000", held for 2.326921666s
	W0910 11:19:27.873461    6184 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:19:27.891333    6184 out.go:201] 
	W0910 11:19:27.895272    6184 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:19:27.895328    6184 out.go:270] * 
	* 
	W0910 11:19:27.898064    6184 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:19:27.910140    6184 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.85s)
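
All of the TestNetworkPlugins failures in this group reduce to the same stderr line: socket_vmnet_client cannot reach the Unix socket at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM is never launched and the start exits with GUEST_PROVISION (status 80). A minimal triage sketch for the affected agent, assuming the Homebrew-packaged socket_vmnet these jobs rely on (the service commands are an assumption, not taken from this log):

	# Check that the socket exists and a daemon is listening on it
	ls -l /var/run/socket_vmnet
	# If it is missing or refusing connections, restart the daemon (vmnet needs root)
	sudo brew services restart socket_vmnet

Once the daemon is listening again, rerunning the start command above should get past VM creation.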

TestNetworkPlugins/group/enable-default-cni/Start (9.88s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.874931792s)

-- stdout --
	* [enable-default-cni-425000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-425000" primary control-plane node in "enable-default-cni-425000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-425000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 11:19:30.099825    6296 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:19:30.099953    6296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:19:30.099957    6296 out.go:358] Setting ErrFile to fd 2...
	I0910 11:19:30.099959    6296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:19:30.100091    6296 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:19:30.101166    6296 out.go:352] Setting JSON to false
	I0910 11:19:30.117709    6296 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4734,"bootTime":1725987636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:19:30.117774    6296 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:19:30.123965    6296 out.go:177] * [enable-default-cni-425000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:19:30.132861    6296 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:19:30.132948    6296 notify.go:220] Checking for updates...
	I0910 11:19:30.139843    6296 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:19:30.142881    6296 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:19:30.145897    6296 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:19:30.148785    6296 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:19:30.151864    6296 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:19:30.155126    6296 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:19:30.155196    6296 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0910 11:19:30.155249    6296 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:19:30.158837    6296 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 11:19:30.164808    6296 start.go:297] selected driver: qemu2
	I0910 11:19:30.164814    6296 start.go:901] validating driver "qemu2" against <nil>
	I0910 11:19:30.164821    6296 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:19:30.167241    6296 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 11:19:30.169853    6296 out.go:177] * Automatically selected the socket_vmnet network
	E0910 11:19:30.172975    6296 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0910 11:19:30.172993    6296 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 11:19:30.173025    6296 cni.go:84] Creating CNI manager for "bridge"
	I0910 11:19:30.173029    6296 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 11:19:30.173054    6296 start.go:340] cluster config:
	{Name:enable-default-cni-425000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:19:30.177045    6296 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:19:30.184894    6296 out.go:177] * Starting "enable-default-cni-425000" primary control-plane node in "enable-default-cni-425000" cluster
	I0910 11:19:30.188929    6296 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:19:30.188945    6296 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:19:30.188958    6296 cache.go:56] Caching tarball of preloaded images
	I0910 11:19:30.189028    6296 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:19:30.189040    6296 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:19:30.189105    6296 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/enable-default-cni-425000/config.json ...
	I0910 11:19:30.189116    6296 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/enable-default-cni-425000/config.json: {Name:mk1292f60b917614153ec92006265c3f0bf7b251 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:19:30.189363    6296 start.go:360] acquireMachinesLock for enable-default-cni-425000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:19:30.189401    6296 start.go:364] duration metric: took 30.042µs to acquireMachinesLock for "enable-default-cni-425000"
	I0910 11:19:30.189414    6296 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:19:30.189446    6296 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:19:30.196867    6296 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 11:19:30.214489    6296 start.go:159] libmachine.API.Create for "enable-default-cni-425000" (driver="qemu2")
	I0910 11:19:30.214514    6296 client.go:168] LocalClient.Create starting
	I0910 11:19:30.214576    6296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:19:30.214611    6296 main.go:141] libmachine: Decoding PEM data...
	I0910 11:19:30.214620    6296 main.go:141] libmachine: Parsing certificate...
	I0910 11:19:30.214660    6296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:19:30.214684    6296 main.go:141] libmachine: Decoding PEM data...
	I0910 11:19:30.214693    6296 main.go:141] libmachine: Parsing certificate...
	I0910 11:19:30.215150    6296 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:19:30.376404    6296 main.go:141] libmachine: Creating SSH key...
	I0910 11:19:30.444819    6296 main.go:141] libmachine: Creating Disk image...
	I0910 11:19:30.444826    6296 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:19:30.445096    6296 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/enable-default-cni-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/enable-default-cni-425000/disk.qcow2
	I0910 11:19:30.454248    6296 main.go:141] libmachine: STDOUT: 
	I0910 11:19:30.454266    6296 main.go:141] libmachine: STDERR: 
	I0910 11:19:30.454320    6296 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/enable-default-cni-425000/disk.qcow2 +20000M
	I0910 11:19:30.462237    6296 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:19:30.462252    6296 main.go:141] libmachine: STDERR: 
	I0910 11:19:30.462262    6296 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/enable-default-cni-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/enable-default-cni-425000/disk.qcow2
	I0910 11:19:30.462268    6296 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:19:30.462283    6296 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:19:30.462308    6296 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/enable-default-cni-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/enable-default-cni-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/enable-default-cni-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:b4:0b:ae:39:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/enable-default-cni-425000/disk.qcow2
	I0910 11:19:30.463869    6296 main.go:141] libmachine: STDOUT: 
	I0910 11:19:30.463885    6296 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:19:30.463906    6296 client.go:171] duration metric: took 249.393416ms to LocalClient.Create
	I0910 11:19:32.466114    6296 start.go:128] duration metric: took 2.276698s to createHost
	I0910 11:19:32.466171    6296 start.go:83] releasing machines lock for "enable-default-cni-425000", held for 2.276822916s
	W0910 11:19:32.466238    6296 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:19:32.477611    6296 out.go:177] * Deleting "enable-default-cni-425000" in qemu2 ...
	W0910 11:19:32.505572    6296 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:19:32.505599    6296 start.go:729] Will try again in 5 seconds ...
	I0910 11:19:37.507207    6296 start.go:360] acquireMachinesLock for enable-default-cni-425000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:19:37.507534    6296 start.go:364] duration metric: took 243.959µs to acquireMachinesLock for "enable-default-cni-425000"
	I0910 11:19:37.507628    6296 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:19:37.507773    6296 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:19:37.511232    6296 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 11:19:37.545651    6296 start.go:159] libmachine.API.Create for "enable-default-cni-425000" (driver="qemu2")
	I0910 11:19:37.545701    6296 client.go:168] LocalClient.Create starting
	I0910 11:19:37.545809    6296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:19:37.545876    6296 main.go:141] libmachine: Decoding PEM data...
	I0910 11:19:37.545892    6296 main.go:141] libmachine: Parsing certificate...
	I0910 11:19:37.545944    6296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:19:37.545984    6296 main.go:141] libmachine: Decoding PEM data...
	I0910 11:19:37.545996    6296 main.go:141] libmachine: Parsing certificate...
	I0910 11:19:37.546479    6296 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:19:37.714843    6296 main.go:141] libmachine: Creating SSH key...
	I0910 11:19:37.880682    6296 main.go:141] libmachine: Creating Disk image...
	I0910 11:19:37.880693    6296 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:19:37.880960    6296 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/enable-default-cni-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/enable-default-cni-425000/disk.qcow2
	I0910 11:19:37.890528    6296 main.go:141] libmachine: STDOUT: 
	I0910 11:19:37.890558    6296 main.go:141] libmachine: STDERR: 
	I0910 11:19:37.890630    6296 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/enable-default-cni-425000/disk.qcow2 +20000M
	I0910 11:19:37.898675    6296 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:19:37.898696    6296 main.go:141] libmachine: STDERR: 
	I0910 11:19:37.898709    6296 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/enable-default-cni-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/enable-default-cni-425000/disk.qcow2
	I0910 11:19:37.898714    6296 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:19:37.898725    6296 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:19:37.898756    6296 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/enable-default-cni-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/enable-default-cni-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/enable-default-cni-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:3a:2f:e9:9d:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/enable-default-cni-425000/disk.qcow2
	I0910 11:19:37.900459    6296 main.go:141] libmachine: STDOUT: 
	I0910 11:19:37.900475    6296 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:19:37.900488    6296 client.go:171] duration metric: took 354.792209ms to LocalClient.Create
	I0910 11:19:39.902666    6296 start.go:128] duration metric: took 2.394923459s to createHost
	I0910 11:19:39.902768    6296 start.go:83] releasing machines lock for "enable-default-cni-425000", held for 2.395280458s
	W0910 11:19:39.903133    6296 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:19:39.917746    6296 out.go:201] 
	W0910 11:19:39.922893    6296 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:19:39.922964    6296 out.go:270] * 
	* 
	W0910 11:19:39.926014    6296 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:19:39.934677    6296 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.88s)
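
Besides the socket_vmnet failure, this test also exercises the deprecated --enable-default-cni flag: the E-level line in the stderr above shows minikube rewriting it to --cni=bridge (and the cluster config accordingly records EnableDefaultCNI:false CNI:bridge). A sketch of the equivalent invocation without the deprecated flag, reusing the test's own profile and driver:

	out/minikube-darwin-arm64 start -p enable-default-cni-425000 --memory=3072 \
	  --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2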

TestNetworkPlugins/group/flannel/Start (10.01s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (10.004303916s)

-- stdout --
	* [flannel-425000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-425000" primary control-plane node in "flannel-425000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-425000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 11:19:42.147552    6408 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:19:42.147690    6408 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:19:42.147693    6408 out.go:358] Setting ErrFile to fd 2...
	I0910 11:19:42.147695    6408 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:19:42.147827    6408 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:19:42.148915    6408 out.go:352] Setting JSON to false
	I0910 11:19:42.165318    6408 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4746,"bootTime":1725987636,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:19:42.165394    6408 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:19:42.172460    6408 out.go:177] * [flannel-425000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:19:42.180354    6408 notify.go:220] Checking for updates...
	I0910 11:19:42.185293    6408 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:19:42.193274    6408 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:19:42.197256    6408 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:19:42.201251    6408 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:19:42.204244    6408 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:19:42.207239    6408 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:19:42.210621    6408 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:19:42.210693    6408 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0910 11:19:42.210742    6408 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:19:42.215306    6408 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 11:19:42.222278    6408 start.go:297] selected driver: qemu2
	I0910 11:19:42.222285    6408 start.go:901] validating driver "qemu2" against <nil>
	I0910 11:19:42.222293    6408 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:19:42.224807    6408 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 11:19:42.228285    6408 out.go:177] * Automatically selected the socket_vmnet network
	I0910 11:19:42.231369    6408 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 11:19:42.231412    6408 cni.go:84] Creating CNI manager for "flannel"
	I0910 11:19:42.231416    6408 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0910 11:19:42.231453    6408 start.go:340] cluster config:
	{Name:flannel-425000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:19:42.235441    6408 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:19:42.243287    6408 out.go:177] * Starting "flannel-425000" primary control-plane node in "flannel-425000" cluster
	I0910 11:19:42.246195    6408 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:19:42.246210    6408 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:19:42.246220    6408 cache.go:56] Caching tarball of preloaded images
	I0910 11:19:42.246274    6408 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:19:42.246280    6408 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:19:42.246358    6408 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/flannel-425000/config.json ...
	I0910 11:19:42.246370    6408 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/flannel-425000/config.json: {Name:mkb5cadefac8e37bdfd9bce1435ee5d2f6b10b7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:19:42.246591    6408 start.go:360] acquireMachinesLock for flannel-425000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:19:42.246623    6408 start.go:364] duration metric: took 27.291µs to acquireMachinesLock for "flannel-425000"
	I0910 11:19:42.246636    6408 start.go:93] Provisioning new machine with config: &{Name:flannel-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:19:42.246663    6408 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:19:42.253230    6408 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 11:19:42.270275    6408 start.go:159] libmachine.API.Create for "flannel-425000" (driver="qemu2")
	I0910 11:19:42.270297    6408 client.go:168] LocalClient.Create starting
	I0910 11:19:42.270357    6408 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:19:42.270391    6408 main.go:141] libmachine: Decoding PEM data...
	I0910 11:19:42.270400    6408 main.go:141] libmachine: Parsing certificate...
	I0910 11:19:42.270436    6408 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:19:42.270467    6408 main.go:141] libmachine: Decoding PEM data...
	I0910 11:19:42.270475    6408 main.go:141] libmachine: Parsing certificate...
	I0910 11:19:42.270846    6408 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:19:42.432985    6408 main.go:141] libmachine: Creating SSH key...
	I0910 11:19:42.628411    6408 main.go:141] libmachine: Creating Disk image...
	I0910 11:19:42.628425    6408 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:19:42.628714    6408 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/flannel-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/flannel-425000/disk.qcow2
	I0910 11:19:42.638584    6408 main.go:141] libmachine: STDOUT: 
	I0910 11:19:42.638605    6408 main.go:141] libmachine: STDERR: 
	I0910 11:19:42.638651    6408 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/flannel-425000/disk.qcow2 +20000M
	I0910 11:19:42.646736    6408 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:19:42.646756    6408 main.go:141] libmachine: STDERR: 
	I0910 11:19:42.646770    6408 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/flannel-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/flannel-425000/disk.qcow2
	I0910 11:19:42.646774    6408 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:19:42.646788    6408 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:19:42.646814    6408 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/flannel-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/flannel-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/flannel-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:d7:f5:5d:91:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/flannel-425000/disk.qcow2
	I0910 11:19:42.648514    6408 main.go:141] libmachine: STDOUT: 
	I0910 11:19:42.648531    6408 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:19:42.648551    6408 client.go:171] duration metric: took 378.2595ms to LocalClient.Create
	I0910 11:19:44.650711    6408 start.go:128] duration metric: took 2.404079709s to createHost
	I0910 11:19:44.650863    6408 start.go:83] releasing machines lock for "flannel-425000", held for 2.404274459s
	W0910 11:19:44.650947    6408 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:19:44.664432    6408 out.go:177] * Deleting "flannel-425000" in qemu2 ...
	W0910 11:19:44.697728    6408 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:19:44.697755    6408 start.go:729] Will try again in 5 seconds ...
	I0910 11:19:49.699866    6408 start.go:360] acquireMachinesLock for flannel-425000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:19:49.700512    6408 start.go:364] duration metric: took 462.125µs to acquireMachinesLock for "flannel-425000"
	I0910 11:19:49.700667    6408 start.go:93] Provisioning new machine with config: &{Name:flannel-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:19:49.701000    6408 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:19:49.706679    6408 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 11:19:49.757691    6408 start.go:159] libmachine.API.Create for "flannel-425000" (driver="qemu2")
	I0910 11:19:49.757739    6408 client.go:168] LocalClient.Create starting
	I0910 11:19:49.757862    6408 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:19:49.757938    6408 main.go:141] libmachine: Decoding PEM data...
	I0910 11:19:49.757955    6408 main.go:141] libmachine: Parsing certificate...
	I0910 11:19:49.758021    6408 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:19:49.758075    6408 main.go:141] libmachine: Decoding PEM data...
	I0910 11:19:49.758089    6408 main.go:141] libmachine: Parsing certificate...
	I0910 11:19:49.758661    6408 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:19:49.925981    6408 main.go:141] libmachine: Creating SSH key...
	I0910 11:19:50.061704    6408 main.go:141] libmachine: Creating Disk image...
	I0910 11:19:50.061714    6408 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:19:50.061970    6408 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/flannel-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/flannel-425000/disk.qcow2
	I0910 11:19:50.071800    6408 main.go:141] libmachine: STDOUT: 
	I0910 11:19:50.071822    6408 main.go:141] libmachine: STDERR: 
	I0910 11:19:50.071872    6408 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/flannel-425000/disk.qcow2 +20000M
	I0910 11:19:50.079820    6408 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:19:50.079835    6408 main.go:141] libmachine: STDERR: 
	I0910 11:19:50.079853    6408 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/flannel-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/flannel-425000/disk.qcow2
	I0910 11:19:50.079860    6408 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:19:50.079870    6408 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:19:50.079897    6408 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/flannel-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/flannel-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/flannel-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:9c:35:c0:f4:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/flannel-425000/disk.qcow2
	I0910 11:19:50.081658    6408 main.go:141] libmachine: STDOUT: 
	I0910 11:19:50.081676    6408 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:19:50.081689    6408 client.go:171] duration metric: took 323.952083ms to LocalClient.Create
	I0910 11:19:52.083831    6408 start.go:128] duration metric: took 2.382862167s to createHost
	I0910 11:19:52.083892    6408 start.go:83] releasing machines lock for "flannel-425000", held for 2.38342075s
	W0910 11:19:52.084092    6408 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:19:52.097612    6408 out.go:201] 
	W0910 11:19:52.101913    6408 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:19:52.101931    6408 out.go:270] * 
	* 
	W0910 11:19:52.103018    6408 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:19:52.114481    6408 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (10.01s)
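
Note that disk provisioning never fails in any of these attempts: each qemu-img convert / qemu-img resize pair completes cleanly, and only the subsequent socket_vmnet hookup aborts the start. The two image commands from the log can be replayed in isolation to sanity-check a local qemu install (a sketch; $MACHINE_DIR stands in for the profile's machine directory and is not a name from the log):

	# Convert the raw boot disk to qcow2, then grow it by 20000 MB, as libmachine does
	qemu-img convert -f raw -O qcow2 "$MACHINE_DIR/disk.qcow2.raw" "$MACHINE_DIR/disk.qcow2"
	qemu-img resize "$MACHINE_DIR/disk.qcow2" +20000M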

TestNetworkPlugins/group/bridge/Start (10s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.995442875s)

-- stdout --
	* [bridge-425000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-425000" primary control-plane node in "bridge-425000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-425000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 11:19:54.488984    6531 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:19:54.489123    6531 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:19:54.489127    6531 out.go:358] Setting ErrFile to fd 2...
	I0910 11:19:54.489129    6531 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:19:54.489277    6531 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:19:54.490442    6531 out.go:352] Setting JSON to false
	I0910 11:19:54.507538    6531 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4758,"bootTime":1725987636,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:19:54.507632    6531 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:19:54.515209    6531 out.go:177] * [bridge-425000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:19:54.522936    6531 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:19:54.523033    6531 notify.go:220] Checking for updates...
	I0910 11:19:54.530918    6531 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:19:54.533968    6531 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:19:54.537011    6531 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:19:54.539937    6531 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:19:54.542961    6531 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:19:54.546385    6531 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:19:54.546449    6531 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0910 11:19:54.546494    6531 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:19:54.550956    6531 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 11:19:54.557972    6531 start.go:297] selected driver: qemu2
	I0910 11:19:54.557979    6531 start.go:901] validating driver "qemu2" against <nil>
	I0910 11:19:54.557985    6531 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:19:54.560293    6531 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 11:19:54.562943    6531 out.go:177] * Automatically selected the socket_vmnet network
	I0910 11:19:54.566048    6531 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 11:19:54.566081    6531 cni.go:84] Creating CNI manager for "bridge"
	I0910 11:19:54.566085    6531 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 11:19:54.566112    6531 start.go:340] cluster config:
	{Name:bridge-425000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:19:54.569639    6531 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:19:54.576932    6531 out.go:177] * Starting "bridge-425000" primary control-plane node in "bridge-425000" cluster
	I0910 11:19:54.580983    6531 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:19:54.581000    6531 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:19:54.581008    6531 cache.go:56] Caching tarball of preloaded images
	I0910 11:19:54.581071    6531 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:19:54.581077    6531 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:19:54.581135    6531 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/bridge-425000/config.json ...
	I0910 11:19:54.581146    6531 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/bridge-425000/config.json: {Name:mk3d51f64f7a6501e6799c109f9ab47c6f03d30c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:19:54.581346    6531 start.go:360] acquireMachinesLock for bridge-425000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:19:54.581377    6531 start.go:364] duration metric: took 25.459µs to acquireMachinesLock for "bridge-425000"
	I0910 11:19:54.581388    6531 start.go:93] Provisioning new machine with config: &{Name:bridge-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:bridge-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:19:54.581421    6531 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:19:54.585920    6531 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 11:19:54.601846    6531 start.go:159] libmachine.API.Create for "bridge-425000" (driver="qemu2")
	I0910 11:19:54.601876    6531 client.go:168] LocalClient.Create starting
	I0910 11:19:54.601955    6531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:19:54.601987    6531 main.go:141] libmachine: Decoding PEM data...
	I0910 11:19:54.601998    6531 main.go:141] libmachine: Parsing certificate...
	I0910 11:19:54.602034    6531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:19:54.602063    6531 main.go:141] libmachine: Decoding PEM data...
	I0910 11:19:54.602071    6531 main.go:141] libmachine: Parsing certificate...
	I0910 11:19:54.602468    6531 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:19:54.762929    6531 main.go:141] libmachine: Creating SSH key...
	I0910 11:19:55.009198    6531 main.go:141] libmachine: Creating Disk image...
	I0910 11:19:55.009210    6531 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:19:55.009518    6531 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/bridge-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/bridge-425000/disk.qcow2
	I0910 11:19:55.019784    6531 main.go:141] libmachine: STDOUT: 
	I0910 11:19:55.019809    6531 main.go:141] libmachine: STDERR: 
	I0910 11:19:55.019862    6531 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/bridge-425000/disk.qcow2 +20000M
	I0910 11:19:55.028255    6531 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:19:55.028271    6531 main.go:141] libmachine: STDERR: 
	I0910 11:19:55.028283    6531 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/bridge-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/bridge-425000/disk.qcow2
	I0910 11:19:55.028289    6531 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:19:55.028305    6531 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:19:55.028329    6531 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/bridge-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/bridge-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/bridge-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:32:5b:7a:fd:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/bridge-425000/disk.qcow2
	I0910 11:19:55.030102    6531 main.go:141] libmachine: STDOUT: 
	I0910 11:19:55.030119    6531 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:19:55.030140    6531 client.go:171] duration metric: took 428.269167ms to LocalClient.Create
	I0910 11:19:57.032344    6531 start.go:128] duration metric: took 2.450954666s to createHost
	I0910 11:19:57.032440    6531 start.go:83] releasing machines lock for "bridge-425000", held for 2.451118458s
	W0910 11:19:57.032522    6531 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:19:57.039849    6531 out.go:177] * Deleting "bridge-425000" in qemu2 ...
	W0910 11:19:57.074464    6531 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:19:57.074492    6531 start.go:729] Will try again in 5 seconds ...
	I0910 11:20:02.076683    6531 start.go:360] acquireMachinesLock for bridge-425000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:20:02.077280    6531 start.go:364] duration metric: took 474.541µs to acquireMachinesLock for "bridge-425000"
	I0910 11:20:02.077364    6531 start.go:93] Provisioning new machine with config: &{Name:bridge-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:bridge-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:20:02.077596    6531 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:20:02.088188    6531 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 11:20:02.128443    6531 start.go:159] libmachine.API.Create for "bridge-425000" (driver="qemu2")
	I0910 11:20:02.128507    6531 client.go:168] LocalClient.Create starting
	I0910 11:20:02.128629    6531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:20:02.128693    6531 main.go:141] libmachine: Decoding PEM data...
	I0910 11:20:02.128708    6531 main.go:141] libmachine: Parsing certificate...
	I0910 11:20:02.128771    6531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:20:02.128808    6531 main.go:141] libmachine: Decoding PEM data...
	I0910 11:20:02.128818    6531 main.go:141] libmachine: Parsing certificate...
	I0910 11:20:02.129333    6531 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:20:02.298193    6531 main.go:141] libmachine: Creating SSH key...
	I0910 11:20:02.387196    6531 main.go:141] libmachine: Creating Disk image...
	I0910 11:20:02.387203    6531 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:20:02.387456    6531 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/bridge-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/bridge-425000/disk.qcow2
	I0910 11:20:02.396822    6531 main.go:141] libmachine: STDOUT: 
	I0910 11:20:02.396840    6531 main.go:141] libmachine: STDERR: 
	I0910 11:20:02.396887    6531 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/bridge-425000/disk.qcow2 +20000M
	I0910 11:20:02.404993    6531 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:20:02.405011    6531 main.go:141] libmachine: STDERR: 
	I0910 11:20:02.405029    6531 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/bridge-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/bridge-425000/disk.qcow2
	I0910 11:20:02.405034    6531 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:20:02.405049    6531 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:20:02.405073    6531 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/bridge-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/bridge-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/bridge-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:d8:70:b7:8d:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/bridge-425000/disk.qcow2
	I0910 11:20:02.406744    6531 main.go:141] libmachine: STDOUT: 
	I0910 11:20:02.406762    6531 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:20:02.406776    6531 client.go:171] duration metric: took 278.272292ms to LocalClient.Create
	I0910 11:20:04.408899    6531 start.go:128] duration metric: took 2.331338333s to createHost
	I0910 11:20:04.408979    6531 start.go:83] releasing machines lock for "bridge-425000", held for 2.331731417s
	W0910 11:20:04.409272    6531 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:20:04.419773    6531 out.go:201] 
	W0910 11:20:04.428906    6531 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:20:04.428961    6531 out.go:270] * 
	* 
	W0910 11:20:04.431425    6531 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:20:04.440696    6531 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (10.00s)

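The invocation recorded in the stderr above shows the wrapper pattern: socket_vmnet_client is given the socket path followed by the full qemu-system-aarch64 command line. Since the client appears to connect to the unix socket before exec'ing the wrapped command, the failure can likely be reproduced in isolation without QEMU at all (a sketch; `true` is just a stand-in for the wrapped command):

    # Hypothetical isolation check for the error above; 'true' stands in for
    # the qemu-system-aarch64 command line that minikube normally passes.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
    # With the daemon reachable this should exit 0 silently; with it down it
    # should print the same error captured throughout these logs:
    #   Failed to connect to "/var/run/socket_vmnet": Connection refused
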
TestNetworkPlugins/group/kubenet/Start (9.93s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
E0910 11:20:10.578386    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.930464375s)

-- stdout --
	* [kubenet-425000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-425000" primary control-plane node in "kubenet-425000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-425000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 11:20:06.638126    6644 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:20:06.638283    6644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:20:06.638286    6644 out.go:358] Setting ErrFile to fd 2...
	I0910 11:20:06.638288    6644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:20:06.638414    6644 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:20:06.639570    6644 out.go:352] Setting JSON to false
	I0910 11:20:06.655770    6644 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4770,"bootTime":1725987636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:20:06.655844    6644 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:20:06.662182    6644 out.go:177] * [kubenet-425000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:20:06.670240    6644 notify.go:220] Checking for updates...
	I0910 11:20:06.675062    6644 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:20:06.676657    6644 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:20:06.680052    6644 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:20:06.683084    6644 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:20:06.686086    6644 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:20:06.689047    6644 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:20:06.692406    6644 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:20:06.692467    6644 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0910 11:20:06.692532    6644 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:20:06.697068    6644 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 11:20:06.704109    6644 start.go:297] selected driver: qemu2
	I0910 11:20:06.704115    6644 start.go:901] validating driver "qemu2" against <nil>
	I0910 11:20:06.704120    6644 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:20:06.706252    6644 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 11:20:06.710069    6644 out.go:177] * Automatically selected the socket_vmnet network
	I0910 11:20:06.713246    6644 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 11:20:06.713278    6644 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0910 11:20:06.713314    6644 start.go:340] cluster config:
	{Name:kubenet-425000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:20:06.716743    6644 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:20:06.724081    6644 out.go:177] * Starting "kubenet-425000" primary control-plane node in "kubenet-425000" cluster
	I0910 11:20:06.728075    6644 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:20:06.728086    6644 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:20:06.728091    6644 cache.go:56] Caching tarball of preloaded images
	I0910 11:20:06.728140    6644 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:20:06.728145    6644 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:20:06.728195    6644 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/kubenet-425000/config.json ...
	I0910 11:20:06.728205    6644 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/kubenet-425000/config.json: {Name:mke3fa741c6bb0c133b6e188bdd1f7291c924d5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:20:06.728409    6644 start.go:360] acquireMachinesLock for kubenet-425000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:20:06.728438    6644 start.go:364] duration metric: took 23.667µs to acquireMachinesLock for "kubenet-425000"
	I0910 11:20:06.728450    6644 start.go:93] Provisioning new machine with config: &{Name:kubenet-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kubenet-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:20:06.728484    6644 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:20:06.736096    6644 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 11:20:06.751328    6644 start.go:159] libmachine.API.Create for "kubenet-425000" (driver="qemu2")
	I0910 11:20:06.751359    6644 client.go:168] LocalClient.Create starting
	I0910 11:20:06.751442    6644 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:20:06.751473    6644 main.go:141] libmachine: Decoding PEM data...
	I0910 11:20:06.751482    6644 main.go:141] libmachine: Parsing certificate...
	I0910 11:20:06.751521    6644 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:20:06.751544    6644 main.go:141] libmachine: Decoding PEM data...
	I0910 11:20:06.751549    6644 main.go:141] libmachine: Parsing certificate...
	I0910 11:20:06.751924    6644 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:20:06.912191    6644 main.go:141] libmachine: Creating SSH key...
	I0910 11:20:06.981286    6644 main.go:141] libmachine: Creating Disk image...
	I0910 11:20:06.981292    6644 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:20:06.981530    6644 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubenet-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubenet-425000/disk.qcow2
	I0910 11:20:06.990327    6644 main.go:141] libmachine: STDOUT: 
	I0910 11:20:06.990347    6644 main.go:141] libmachine: STDERR: 
	I0910 11:20:06.990392    6644 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubenet-425000/disk.qcow2 +20000M
	I0910 11:20:06.998475    6644 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:20:06.998492    6644 main.go:141] libmachine: STDERR: 
	I0910 11:20:06.998509    6644 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubenet-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubenet-425000/disk.qcow2
	I0910 11:20:06.998513    6644 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:20:06.998526    6644 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:20:06.998551    6644 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubenet-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubenet-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubenet-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:65:2f:6f:67:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubenet-425000/disk.qcow2
	I0910 11:20:07.000186    6644 main.go:141] libmachine: STDOUT: 
	I0910 11:20:07.000197    6644 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:20:07.000217    6644 client.go:171] duration metric: took 248.861291ms to LocalClient.Create
	I0910 11:20:09.002404    6644 start.go:128] duration metric: took 2.273950292s to createHost
	I0910 11:20:09.002521    6644 start.go:83] releasing machines lock for "kubenet-425000", held for 2.2741325s
	W0910 11:20:09.002596    6644 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:20:09.019723    6644 out.go:177] * Deleting "kubenet-425000" in qemu2 ...
	W0910 11:20:09.051281    6644 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:20:09.051311    6644 start.go:729] Will try again in 5 seconds ...
	I0910 11:20:14.053464    6644 start.go:360] acquireMachinesLock for kubenet-425000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:20:14.054049    6644 start.go:364] duration metric: took 449.041µs to acquireMachinesLock for "kubenet-425000"
	I0910 11:20:14.054202    6644 start.go:93] Provisioning new machine with config: &{Name:kubenet-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kubenet-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:20:14.054504    6644 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:20:14.060175    6644 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 11:20:14.104342    6644 start.go:159] libmachine.API.Create for "kubenet-425000" (driver="qemu2")
	I0910 11:20:14.104395    6644 client.go:168] LocalClient.Create starting
	I0910 11:20:14.104514    6644 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:20:14.104564    6644 main.go:141] libmachine: Decoding PEM data...
	I0910 11:20:14.104576    6644 main.go:141] libmachine: Parsing certificate...
	I0910 11:20:14.104635    6644 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:20:14.104673    6644 main.go:141] libmachine: Decoding PEM data...
	I0910 11:20:14.104690    6644 main.go:141] libmachine: Parsing certificate...
	I0910 11:20:14.105350    6644 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:20:14.281851    6644 main.go:141] libmachine: Creating SSH key...
	I0910 11:20:14.478543    6644 main.go:141] libmachine: Creating Disk image...
	I0910 11:20:14.478557    6644 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:20:14.478884    6644 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubenet-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubenet-425000/disk.qcow2
	I0910 11:20:14.488713    6644 main.go:141] libmachine: STDOUT: 
	I0910 11:20:14.488734    6644 main.go:141] libmachine: STDERR: 
	I0910 11:20:14.488779    6644 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubenet-425000/disk.qcow2 +20000M
	I0910 11:20:14.496999    6644 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:20:14.497013    6644 main.go:141] libmachine: STDERR: 
	I0910 11:20:14.497022    6644 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubenet-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubenet-425000/disk.qcow2
	I0910 11:20:14.497027    6644 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:20:14.497038    6644 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:20:14.497060    6644 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubenet-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubenet-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubenet-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:f0:80:7e:96:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/kubenet-425000/disk.qcow2
	I0910 11:20:14.498758    6644 main.go:141] libmachine: STDOUT: 
	I0910 11:20:14.498772    6644 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:20:14.498783    6644 client.go:171] duration metric: took 394.392334ms to LocalClient.Create
	I0910 11:20:16.500827    6644 start.go:128] duration metric: took 2.446369292s to createHost
	I0910 11:20:16.500863    6644 start.go:83] releasing machines lock for "kubenet-425000", held for 2.446859167s
	W0910 11:20:16.500994    6644 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:20:16.507330    6644 out.go:201] 
	W0910 11:20:16.511290    6644 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:20:16.511300    6644 out.go:270] * 
	* 
	W0910 11:20:16.512064    6644 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:20:16.523315    6644 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.93s)

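Each failed start above ends with minikube's own recovery hint ("Running \"minikube delete -p ...\" may fix it"). Once the daemon is restored, the half-created profiles from this run can be cleared before re-running the suite, e.g. (profile names taken from this section; a sketch, not part of the test harness, and likewise for the other failed profiles in this report):

    # Remove the profiles left behind by the failed attempts in this section:
    out/minikube-darwin-arm64 delete -p bridge-425000
    out/minikube-darwin-arm64 delete -p kubenet-425000
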
TestStartStop/group/old-k8s-version/serial/FirstStart (9.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-497000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-497000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.883658833s)

-- stdout --
	* [old-k8s-version-497000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-497000" primary control-plane node in "old-k8s-version-497000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-497000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 11:20:18.681234    6759 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:20:18.681349    6759 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:20:18.681352    6759 out.go:358] Setting ErrFile to fd 2...
	I0910 11:20:18.681355    6759 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:20:18.681473    6759 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:20:18.682553    6759 out.go:352] Setting JSON to false
	I0910 11:20:18.699016    6759 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4782,"bootTime":1725987636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:20:18.699093    6759 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:20:18.706719    6759 out.go:177] * [old-k8s-version-497000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:20:18.714579    6759 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:20:18.714647    6759 notify.go:220] Checking for updates...
	I0910 11:20:18.721417    6759 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:20:18.724519    6759 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:20:18.727529    6759 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:20:18.728965    6759 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:20:18.731506    6759 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:20:18.734848    6759 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:20:18.734915    6759 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0910 11:20:18.734955    6759 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:20:18.739401    6759 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 11:20:18.746504    6759 start.go:297] selected driver: qemu2
	I0910 11:20:18.746509    6759 start.go:901] validating driver "qemu2" against <nil>
	I0910 11:20:18.746515    6759 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:20:18.748711    6759 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 11:20:18.751541    6759 out.go:177] * Automatically selected the socket_vmnet network
	I0910 11:20:18.754563    6759 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 11:20:18.754591    6759 cni.go:84] Creating CNI manager for ""
	I0910 11:20:18.754596    6759 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0910 11:20:18.754622    6759 start.go:340] cluster config:
	{Name:old-k8s-version-497000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-497000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/
socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:20:18.758169    6759 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:20:18.766431    6759 out.go:177] * Starting "old-k8s-version-497000" primary control-plane node in "old-k8s-version-497000" cluster
	I0910 11:20:18.770564    6759 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0910 11:20:18.770584    6759 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0910 11:20:18.770590    6759 cache.go:56] Caching tarball of preloaded images
	I0910 11:20:18.770665    6759 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:20:18.770671    6759 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0910 11:20:18.770726    6759 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/old-k8s-version-497000/config.json ...
	I0910 11:20:18.770741    6759 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/old-k8s-version-497000/config.json: {Name:mk23bfb57fd234a305b43619b1241edf45a7a64f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:20:18.770940    6759 start.go:360] acquireMachinesLock for old-k8s-version-497000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:20:18.770971    6759 start.go:364] duration metric: took 25.333µs to acquireMachinesLock for "old-k8s-version-497000"
	I0910 11:20:18.770983    6759 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-497000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-497000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:20:18.771008    6759 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:20:18.779522    6759 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 11:20:18.795224    6759 start.go:159] libmachine.API.Create for "old-k8s-version-497000" (driver="qemu2")
	I0910 11:20:18.795247    6759 client.go:168] LocalClient.Create starting
	I0910 11:20:18.795318    6759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:20:18.795351    6759 main.go:141] libmachine: Decoding PEM data...
	I0910 11:20:18.795359    6759 main.go:141] libmachine: Parsing certificate...
	I0910 11:20:18.795395    6759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:20:18.795418    6759 main.go:141] libmachine: Decoding PEM data...
	I0910 11:20:18.795425    6759 main.go:141] libmachine: Parsing certificate...
	I0910 11:20:18.795856    6759 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:20:18.958278    6759 main.go:141] libmachine: Creating SSH key...
	I0910 11:20:19.107718    6759 main.go:141] libmachine: Creating Disk image...
	I0910 11:20:19.107726    6759 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:20:19.107995    6759 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/disk.qcow2
	I0910 11:20:19.117833    6759 main.go:141] libmachine: STDOUT: 
	I0910 11:20:19.117854    6759 main.go:141] libmachine: STDERR: 
	I0910 11:20:19.117920    6759 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/disk.qcow2 +20000M
	I0910 11:20:19.125859    6759 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:20:19.125873    6759 main.go:141] libmachine: STDERR: 
	I0910 11:20:19.125887    6759 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/disk.qcow2
	I0910 11:20:19.125891    6759 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:20:19.125912    6759 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:20:19.125935    6759 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:66:10:05:1d:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/disk.qcow2
	I0910 11:20:19.127588    6759 main.go:141] libmachine: STDOUT: 
	I0910 11:20:19.127605    6759 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:20:19.127626    6759 client.go:171] duration metric: took 332.383708ms to LocalClient.Create
	I0910 11:20:21.129816    6759 start.go:128] duration metric: took 2.358844042s to createHost
	I0910 11:20:21.129890    6759 start.go:83] releasing machines lock for "old-k8s-version-497000", held for 2.358972917s
	W0910 11:20:21.129955    6759 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:20:21.144093    6759 out.go:177] * Deleting "old-k8s-version-497000" in qemu2 ...
	W0910 11:20:21.174915    6759 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:20:21.174943    6759 start.go:729] Will try again in 5 seconds ...
	I0910 11:20:26.176945    6759 start.go:360] acquireMachinesLock for old-k8s-version-497000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:20:26.177176    6759 start.go:364] duration metric: took 173.625µs to acquireMachinesLock for "old-k8s-version-497000"
	I0910 11:20:26.177235    6759 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-497000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-497000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:20:26.177334    6759 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:20:26.186718    6759 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 11:20:26.209342    6759 start.go:159] libmachine.API.Create for "old-k8s-version-497000" (driver="qemu2")
	I0910 11:20:26.209383    6759 client.go:168] LocalClient.Create starting
	I0910 11:20:26.209459    6759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:20:26.209508    6759 main.go:141] libmachine: Decoding PEM data...
	I0910 11:20:26.209520    6759 main.go:141] libmachine: Parsing certificate...
	I0910 11:20:26.209560    6759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:20:26.209591    6759 main.go:141] libmachine: Decoding PEM data...
	I0910 11:20:26.209599    6759 main.go:141] libmachine: Parsing certificate...
	I0910 11:20:26.209939    6759 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:20:26.369958    6759 main.go:141] libmachine: Creating SSH key...
	I0910 11:20:26.470250    6759 main.go:141] libmachine: Creating Disk image...
	I0910 11:20:26.470258    6759 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:20:26.470503    6759 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/disk.qcow2
	I0910 11:20:26.479972    6759 main.go:141] libmachine: STDOUT: 
	I0910 11:20:26.479991    6759 main.go:141] libmachine: STDERR: 
	I0910 11:20:26.480038    6759 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/disk.qcow2 +20000M
	I0910 11:20:26.488316    6759 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:20:26.488343    6759 main.go:141] libmachine: STDERR: 
	I0910 11:20:26.488357    6759 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/disk.qcow2
	I0910 11:20:26.488370    6759 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:20:26.488377    6759 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:20:26.488403    6759 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:05:b7:60:08:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/disk.qcow2
	I0910 11:20:26.490059    6759 main.go:141] libmachine: STDOUT: 
	I0910 11:20:26.490073    6759 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:20:26.490085    6759 client.go:171] duration metric: took 280.704959ms to LocalClient.Create
	I0910 11:20:28.492214    6759 start.go:128] duration metric: took 2.314905791s to createHost
	I0910 11:20:28.492270    6759 start.go:83] releasing machines lock for "old-k8s-version-497000", held for 2.315145583s
	W0910 11:20:28.492569    6759 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-497000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-497000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:20:28.506331    6759 out.go:201] 
	W0910 11:20:28.510286    6759 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:20:28.510313    6759 out.go:270] * 
	* 
	W0910 11:20:28.512063    6759 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:20:28.523230    6759 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-497000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
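
Analysis: the start never gets past host creation. On both attempts the SSH key and qcow2 disk are created cleanly (the qemu-img convert and resize calls return empty STDERR), and the run fails only when /opt/socket_vmnet/bin/socket_vmnet_client tries to hand a vmnet file descriptor to qemu-system-aarch64: the daemon behind /var/run/socket_vmnet refuses the connection. minikube retries once after 5 seconds, then exits with GUEST_PROVISION, which surfaces as exit status 80. A minimal triage sketch on the affected host follows; the launchd label and the --vmnet-gateway value are assumptions from a default socket_vmnet install, not values taken from this log.

	# Is anything serving the socket the log points at?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Restart the daemon if it is not running (label assumed from a default
	# socket_vmnet `make install`; adjust to the host's actual setup):
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet
	# Or run it in the foreground to watch for errors (gateway address is
	# an illustrative default, not from this report):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet
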
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-497000 -n old-k8s-version-497000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-497000 -n old-k8s-version-497000: exit status 7 (54.286167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-497000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.94s)

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-497000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-497000 create -f testdata/busybox.yaml: exit status 1 (29.399083ms)

** stderr ** 
	error: context "old-k8s-version-497000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-497000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-497000 -n old-k8s-version-497000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-497000 -n old-k8s-version-497000: exit status 7 (30.138334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-497000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-497000 -n old-k8s-version-497000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-497000 -n old-k8s-version-497000: exit status 7 (29.67825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-497000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
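
Analysis: this failure is purely a cascade from FirstStart. The cluster was never provisioned, so no kubeconfig context named old-k8s-version-497000 exists and kubectl create fails client-side before the manifest is ever sent to an API server. A quick way to confirm the missing context (standard kubectl, nothing minikube-specific) before treating the deploy as an independent bug:

	kubectl config get-contexts                          # old-k8s-version-497000 will be absent
	kubectl --context old-k8s-version-497000 get nodes   # reproduces: context "old-k8s-version-497000" does not exist
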

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-497000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-497000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-497000 describe deploy/metrics-server -n kube-system: exit status 1 (26.922583ms)

** stderr ** 
	error: context "old-k8s-version-497000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-497000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-497000 -n old-k8s-version-497000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-497000 -n old-k8s-version-497000: exit status 7 (30.473667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-497000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
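
Analysis: the same cascade. The addons enable command records no non-zero exit, presumably because with the host stopped it only updates the stored profile, but the verification step needs a live API server and the context does not exist, so the test sees empty deployment info. On a healthy cluster the assertion amounts to checking that the metrics-server deployment image was rewritten to the fake registry; a hedged equivalent of that check (the jsonpath form is illustrative — the test itself greps kubectl describe output):

	kubectl --context old-k8s-version-497000 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4
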

TestStartStop/group/no-preload/serial/FirstStart (10.14s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-738000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-738000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (10.086362959s)

-- stdout --
	* [no-preload-738000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-738000" primary control-plane node in "no-preload-738000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-738000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 11:20:31.651731    6807 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:20:31.651867    6807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:20:31.651870    6807 out.go:358] Setting ErrFile to fd 2...
	I0910 11:20:31.651873    6807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:20:31.652003    6807 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:20:31.653116    6807 out.go:352] Setting JSON to false
	I0910 11:20:31.669280    6807 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4795,"bootTime":1725987636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:20:31.669352    6807 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:20:31.673817    6807 out.go:177] * [no-preload-738000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:20:31.681733    6807 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:20:31.681756    6807 notify.go:220] Checking for updates...
	I0910 11:20:31.688772    6807 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:20:31.691645    6807 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:20:31.694800    6807 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:20:31.697835    6807 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:20:31.700738    6807 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:20:31.704045    6807 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:20:31.704122    6807 config.go:182] Loaded profile config "old-k8s-version-497000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0910 11:20:31.704178    6807 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:20:31.707752    6807 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 11:20:31.714761    6807 start.go:297] selected driver: qemu2
	I0910 11:20:31.714767    6807 start.go:901] validating driver "qemu2" against <nil>
	I0910 11:20:31.714773    6807 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:20:31.717139    6807 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 11:20:31.720780    6807 out.go:177] * Automatically selected the socket_vmnet network
	I0910 11:20:31.723757    6807 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 11:20:31.723799    6807 cni.go:84] Creating CNI manager for ""
	I0910 11:20:31.723806    6807 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:20:31.723811    6807 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 11:20:31.723859    6807 start.go:340] cluster config:
	{Name:no-preload-738000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-738000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:20:31.727599    6807 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:20:31.734800    6807 out.go:177] * Starting "no-preload-738000" primary control-plane node in "no-preload-738000" cluster
	I0910 11:20:31.738781    6807 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:20:31.738888    6807 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/no-preload-738000/config.json ...
	I0910 11:20:31.738895    6807 cache.go:107] acquiring lock: {Name:mk06ce94e3b7e3ca8885184edeca4f7e5645ca7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:20:31.738914    6807 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/no-preload-738000/config.json: {Name:mkda67df3240b46b2d373b30d120200e84af335f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:20:31.738904    6807 cache.go:107] acquiring lock: {Name:mkc72d636ab182fcf861e1e15e2606b32c64a9a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:20:31.738911    6807 cache.go:107] acquiring lock: {Name:mke7024365ca08d407bef3b5b845f73ef105af20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:20:31.738967    6807 cache.go:115] /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0910 11:20:31.738976    6807 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 83.084µs
	I0910 11:20:31.738974    6807 cache.go:107] acquiring lock: {Name:mk971764c5b139461a306725ae5e6036e89bc73e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:20:31.738982    6807 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0910 11:20:31.738965    6807 cache.go:107] acquiring lock: {Name:mk8d7ec58511be189d6e47e5a53f0f631eb45385 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:20:31.739055    6807 cache.go:107] acquiring lock: {Name:mkc3e627fb73e259c37d748d5e6df7162f5e3b43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:20:31.739090    6807 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 11:20:31.739093    6807 cache.go:107] acquiring lock: {Name:mk86d5bead7159d19bbcbbfcd8f52d4ea3bfb1ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:20:31.739147    6807 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0910 11:20:31.739153    6807 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0910 11:20:31.739194    6807 cache.go:107] acquiring lock: {Name:mk18824a930e97209272e5cf2e5cc5380cc03b98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:20:31.739367    6807 start.go:360] acquireMachinesLock for no-preload-738000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:20:31.739373    6807 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0910 11:20:31.739388    6807 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0910 11:20:31.739408    6807 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0910 11:20:31.739415    6807 start.go:364] duration metric: took 41.375µs to acquireMachinesLock for "no-preload-738000"
	I0910 11:20:31.739429    6807 start.go:93] Provisioning new machine with config: &{Name:no-preload-738000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-738000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:20:31.739458    6807 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:20:31.739572    6807 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0910 11:20:31.747714    6807 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 11:20:31.751004    6807 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0910 11:20:31.751037    6807 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0910 11:20:31.751001    6807 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0910 11:20:31.753306    6807 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0910 11:20:31.753333    6807 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0910 11:20:31.753365    6807 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0910 11:20:31.753396    6807 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 11:20:31.766337    6807 start.go:159] libmachine.API.Create for "no-preload-738000" (driver="qemu2")
	I0910 11:20:31.766360    6807 client.go:168] LocalClient.Create starting
	I0910 11:20:31.766440    6807 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:20:31.766476    6807 main.go:141] libmachine: Decoding PEM data...
	I0910 11:20:31.766485    6807 main.go:141] libmachine: Parsing certificate...
	I0910 11:20:31.766529    6807 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:20:31.766552    6807 main.go:141] libmachine: Decoding PEM data...
	I0910 11:20:31.766562    6807 main.go:141] libmachine: Parsing certificate...
	I0910 11:20:31.766984    6807 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:20:31.944097    6807 main.go:141] libmachine: Creating SSH key...
	I0910 11:20:32.058830    6807 main.go:141] libmachine: Creating Disk image...
	I0910 11:20:32.058847    6807 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:20:32.059069    6807 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/disk.qcow2
	I0910 11:20:32.068193    6807 main.go:141] libmachine: STDOUT: 
	I0910 11:20:32.068215    6807 main.go:141] libmachine: STDERR: 
	I0910 11:20:32.068261    6807 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/disk.qcow2 +20000M
	I0910 11:20:32.076305    6807 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:20:32.076344    6807 main.go:141] libmachine: STDERR: 
	I0910 11:20:32.076361    6807 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/disk.qcow2
	I0910 11:20:32.076366    6807 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:20:32.076379    6807 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:20:32.076415    6807 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:64:29:8d:bf:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/disk.qcow2
	I0910 11:20:32.078117    6807 main.go:141] libmachine: STDOUT: 
	I0910 11:20:32.078136    6807 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:20:32.078155    6807 client.go:171] duration metric: took 311.798458ms to LocalClient.Create
	I0910 11:20:32.644151    6807 cache.go:162] opening:  /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0
	I0910 11:20:32.691729    6807 cache.go:162] opening:  /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0
	I0910 11:20:32.706026    6807 cache.go:162] opening:  /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0910 11:20:32.709040    6807 cache.go:162] opening:  /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0910 11:20:32.812204    6807 cache.go:162] opening:  /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0910 11:20:32.844245    6807 cache.go:162] opening:  /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0
	I0910 11:20:32.845182    6807 cache.go:157] /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0910 11:20:32.845212    6807 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 1.106233292s
	I0910 11:20:32.845233    6807 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0910 11:20:32.856649    6807 cache.go:162] opening:  /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0910 11:20:34.078301    6807 start.go:128] duration metric: took 2.3388765s to createHost
	I0910 11:20:34.078381    6807 start.go:83] releasing machines lock for "no-preload-738000", held for 2.33901825s
	W0910 11:20:34.078459    6807 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:20:34.101428    6807 out.go:177] * Deleting "no-preload-738000" in qemu2 ...
	W0910 11:20:34.135383    6807 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:20:34.135416    6807 start.go:729] Will try again in 5 seconds ...
	I0910 11:20:35.146112    6807 cache.go:157] /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0910 11:20:35.146167    6807 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 3.407084s
	I0910 11:20:35.146227    6807 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0910 11:20:35.832203    6807 cache.go:157] /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0910 11:20:35.832258    6807 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.093383083s
	I0910 11:20:35.832285    6807 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0910 11:20:36.032955    6807 cache.go:157] /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0910 11:20:36.033005    6807 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 4.294064s
	I0910 11:20:36.033034    6807 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0910 11:20:36.717079    6807 cache.go:157] /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0910 11:20:36.717143    6807 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 4.978373292s
	I0910 11:20:36.717174    6807 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0910 11:20:37.245576    6807 cache.go:157] /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0910 11:20:37.245635    6807 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 5.506878833s
	I0910 11:20:37.245675    6807 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0910 11:20:39.135453    6807 start.go:360] acquireMachinesLock for no-preload-738000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:20:39.144516    6807 start.go:364] duration metric: took 9.007292ms to acquireMachinesLock for "no-preload-738000"
	I0910 11:20:39.144581    6807 start.go:93] Provisioning new machine with config: &{Name:no-preload-738000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-738000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:20:39.144793    6807 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:20:39.155351    6807 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 11:20:39.205498    6807 start.go:159] libmachine.API.Create for "no-preload-738000" (driver="qemu2")
	I0910 11:20:39.205573    6807 client.go:168] LocalClient.Create starting
	I0910 11:20:39.205693    6807 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:20:39.205768    6807 main.go:141] libmachine: Decoding PEM data...
	I0910 11:20:39.205789    6807 main.go:141] libmachine: Parsing certificate...
	I0910 11:20:39.205868    6807 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:20:39.205912    6807 main.go:141] libmachine: Decoding PEM data...
	I0910 11:20:39.205930    6807 main.go:141] libmachine: Parsing certificate...
	I0910 11:20:39.206445    6807 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:20:39.379559    6807 main.go:141] libmachine: Creating SSH key...
	I0910 11:20:39.642285    6807 main.go:141] libmachine: Creating Disk image...
	I0910 11:20:39.642297    6807 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:20:39.642520    6807 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/disk.qcow2
	I0910 11:20:39.652694    6807 main.go:141] libmachine: STDOUT: 
	I0910 11:20:39.652726    6807 main.go:141] libmachine: STDERR: 
	I0910 11:20:39.652793    6807 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/disk.qcow2 +20000M
	I0910 11:20:39.661909    6807 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:20:39.661945    6807 main.go:141] libmachine: STDERR: 
	I0910 11:20:39.661967    6807 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/disk.qcow2
	I0910 11:20:39.661970    6807 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:20:39.661984    6807 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:20:39.662025    6807 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:87:c1:ee:68:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/disk.qcow2
	I0910 11:20:39.664094    6807 main.go:141] libmachine: STDOUT: 
	I0910 11:20:39.664112    6807 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:20:39.664127    6807 client.go:171] duration metric: took 458.560917ms to LocalClient.Create
	I0910 11:20:41.665811    6807 start.go:128] duration metric: took 2.521037708s to createHost
	I0910 11:20:41.665880    6807 start.go:83] releasing machines lock for "no-preload-738000", held for 2.521402375s
	W0910 11:20:41.666136    6807 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-738000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-738000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:20:41.679973    6807 out.go:201] 
	W0910 11:20:41.684133    6807 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:20:41.684171    6807 out.go:270] * 
	* 
	W0910 11:20:41.686981    6807 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:20:41.695028    6807 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-738000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-738000 -n no-preload-738000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-738000 -n no-preload-738000: exit status 7 (50.852ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-738000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.14s)
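Note on the failure mode: every start in this group dies the same way. libmachine builds the disk image successfully, then launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots and the host stays Stopped. The refusal is reproducible outside minikube with a minimal probe; this is a sketch against the socket path quoted in the logs, not part of the test suite:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Dial the same unix socket that socket_vmnet_client uses.
	// "connection refused" means no socket_vmnet daemon is listening there,
	// which is exactly the condition the tests above keep hitting.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the probe fails, restarting the socket_vmnet service on the build host (it typically runs as a root launchd service when installed via Homebrew) is the usual remedy; the suggested "minikube delete -p no-preload-738000" only clears the half-created profile.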

TestStartStop/group/old-k8s-version/serial/SecondStart (6.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-497000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-497000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (6.890923125s)

-- stdout --
	* [old-k8s-version-497000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-497000" primary control-plane node in "old-k8s-version-497000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-497000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-497000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 11:20:32.316563    6851 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:20:32.316720    6851 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:20:32.316723    6851 out.go:358] Setting ErrFile to fd 2...
	I0910 11:20:32.316725    6851 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:20:32.316858    6851 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:20:32.317891    6851 out.go:352] Setting JSON to false
	I0910 11:20:32.333841    6851 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4796,"bootTime":1725987636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:20:32.333937    6851 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:20:32.338810    6851 out.go:177] * [old-k8s-version-497000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:20:32.345766    6851 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:20:32.345804    6851 notify.go:220] Checking for updates...
	I0910 11:20:32.352713    6851 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:20:32.355742    6851 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:20:32.358849    6851 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:20:32.361700    6851 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:20:32.364725    6851 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:20:32.367959    6851 config.go:182] Loaded profile config "old-k8s-version-497000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0910 11:20:32.371704    6851 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0910 11:20:32.374726    6851 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:20:32.379757    6851 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 11:20:32.386663    6851 start.go:297] selected driver: qemu2
	I0910 11:20:32.386669    6851 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-497000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-497000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:20:32.386727    6851 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:20:32.389030    6851 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 11:20:32.389059    6851 cni.go:84] Creating CNI manager for ""
	I0910 11:20:32.389066    6851 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0910 11:20:32.389102    6851 start.go:340] cluster config:
	{Name:old-k8s-version-497000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-497000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:20:32.392846    6851 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:20:32.400733    6851 out.go:177] * Starting "old-k8s-version-497000" primary control-plane node in "old-k8s-version-497000" cluster
	I0910 11:20:32.405730    6851 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0910 11:20:32.405762    6851 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0910 11:20:32.405767    6851 cache.go:56] Caching tarball of preloaded images
	I0910 11:20:32.405843    6851 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:20:32.405848    6851 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0910 11:20:32.405901    6851 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/old-k8s-version-497000/config.json ...
	I0910 11:20:32.406530    6851 start.go:360] acquireMachinesLock for old-k8s-version-497000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:20:34.078517    6851 start.go:364] duration metric: took 1.67197575s to acquireMachinesLock for "old-k8s-version-497000"
	I0910 11:20:34.078631    6851 start.go:96] Skipping create...Using existing machine configuration
	I0910 11:20:34.078689    6851 fix.go:54] fixHost starting: 
	I0910 11:20:34.079392    6851 fix.go:112] recreateIfNeeded on old-k8s-version-497000: state=Stopped err=<nil>
	W0910 11:20:34.079445    6851 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 11:20:34.084377    6851 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-497000" ...
	I0910 11:20:34.105488    6851 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:20:34.105718    6851 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:05:b7:60:08:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/disk.qcow2
	I0910 11:20:34.117556    6851 main.go:141] libmachine: STDOUT: 
	I0910 11:20:34.117643    6851 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:20:34.117769    6851 fix.go:56] duration metric: took 39.11375ms for fixHost
	I0910 11:20:34.117787    6851 start.go:83] releasing machines lock for "old-k8s-version-497000", held for 39.236542ms
	W0910 11:20:34.117827    6851 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:20:34.117989    6851 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:20:34.118008    6851 start.go:729] Will try again in 5 seconds ...
	I0910 11:20:39.118233    6851 start.go:360] acquireMachinesLock for old-k8s-version-497000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:20:39.118787    6851 start.go:364] duration metric: took 405.583µs to acquireMachinesLock for "old-k8s-version-497000"
	I0910 11:20:39.118914    6851 start.go:96] Skipping create...Using existing machine configuration
	I0910 11:20:39.118936    6851 fix.go:54] fixHost starting: 
	I0910 11:20:39.119708    6851 fix.go:112] recreateIfNeeded on old-k8s-version-497000: state=Stopped err=<nil>
	W0910 11:20:39.119739    6851 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 11:20:39.125495    6851 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-497000" ...
	I0910 11:20:39.134311    6851 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:20:39.134556    6851 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:05:b7:60:08:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/old-k8s-version-497000/disk.qcow2
	I0910 11:20:39.144259    6851 main.go:141] libmachine: STDOUT: 
	I0910 11:20:39.144331    6851 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:20:39.144420    6851 fix.go:56] duration metric: took 25.486792ms for fixHost
	I0910 11:20:39.144442    6851 start.go:83] releasing machines lock for "old-k8s-version-497000", held for 25.625417ms
	W0910 11:20:39.144689    6851 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-497000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-497000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:20:39.155339    6851 out.go:201] 
	W0910 11:20:39.158449    6851 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:20:39.158482    6851 out.go:270] * 
	* 
	W0910 11:20:39.161318    6851 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:20:39.169290    6851 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-497000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-497000 -n old-k8s-version-497000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-497000 -n old-k8s-version-497000: exit status 7 (53.39625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-497000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (6.95s)
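The shape of this failure is worth noting: because the profile already exists, minikube takes the fixHost path instead of a fresh create, logs "StartHost failed, but will try again", waits a fixed five seconds, retries exactly once, and only then exits 80 with GUEST_PROVISION. A simplified sketch of that retry shape (illustrative only, not minikube's actual code):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// startHost stands in for the driver start that keeps failing in the log.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // the "Will try again in 5 seconds" gap in the log
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			os.Exit(80) // the exit status the test harness records
		}
	}
}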

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-497000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-497000 -n old-k8s-version-497000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-497000 -n old-k8s-version-497000: exit status 7 (34.572625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-497000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
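From here on, the old-k8s-version failures are pure knock-on effects: the start never succeeded, so no context named old-k8s-version-497000 was ever written to the kubeconfig this run points at, and every kubectl call fails with "context ... does not exist". A quick way to confirm which contexts the file really contains (a client-go sketch, assuming KUBECONFIG is exported as in the run above):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same file the failing kubectl calls were pointed at via KUBECONFIG.
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	for name := range cfg.Contexts {
		fmt.Println(name) // old-k8s-version-497000 will not be listed
	}
}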

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-497000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-497000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-497000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.716583ms)

** stderr ** 
	error: context "old-k8s-version-497000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-497000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-497000 -n old-k8s-version-497000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-497000 -n old-k8s-version-497000: exit status 7 (33.722375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-497000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-497000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
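The block above is a go-cmp "-want +got" diff: every expected v1.20.0 image sits on a "-" (want-only) line and the "+" side is empty, because `image list` against the stopped host returned nothing. A minimal reproduction of the notation (illustrative only):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{"k8s.gcr.io/pause:3.2"} // one of the images the test expects
	got := []string{}                        // the stopped VM reported no images at all
	// cmp.Diff marks values only in want with "-" and values only in got with "+".
	fmt.Println(cmp.Diff(want, got))
}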
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-497000 -n old-k8s-version-497000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-497000 -n old-k8s-version-497000: exit status 7 (30.267667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-497000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.09s)

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-497000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-497000 --alsologtostderr -v=1: exit status 83 (51.079459ms)

-- stdout --
	* The control-plane node old-k8s-version-497000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-497000"

-- /stdout --
** stderr ** 
	I0910 11:20:39.449241    6873 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:20:39.449619    6873 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:20:39.449630    6873 out.go:358] Setting ErrFile to fd 2...
	I0910 11:20:39.449632    6873 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:20:39.449772    6873 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:20:39.449981    6873 out.go:352] Setting JSON to false
	I0910 11:20:39.449989    6873 mustload.go:65] Loading cluster: old-k8s-version-497000
	I0910 11:20:39.450197    6873 config.go:182] Loaded profile config "old-k8s-version-497000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0910 11:20:39.455269    6873 out.go:177] * The control-plane node old-k8s-version-497000 host is not running: state=Stopped
	I0910 11:20:39.467307    6873 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-497000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-497000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-497000 -n old-k8s-version-497000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-497000 -n old-k8s-version-497000: exit status 7 (29.471208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-497000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-497000 -n old-k8s-version-497000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-497000 -n old-k8s-version-497000: exit status 7 (29.722167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-497000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)
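Unlike the starts, which exit 80 after retrying, pause fails fast with exit status 83: mustload loads the profile config, sees the host state is Stopped, prints the advice text, and never attempts a pause. The guard pattern, sketched (not minikube's code):

package main

import (
	"fmt"
	"os"
)

// hostState stands in for the driver status lookup behind mustload.go:65.
func hostState(profile string) string { return "Stopped" }

func main() {
	profile := "old-k8s-version-497000"
	if st := hostState(profile); st != "Running" {
		fmt.Printf("* The control-plane node %s host is not running: state=%s\n", profile, st)
		fmt.Printf("  To start a cluster, run: %q\n", "minikube start -p "+profile)
		os.Exit(83) // the exit status recorded for the pause attempt above
	}
}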

TestStartStop/group/embed-certs/serial/FirstStart (11.73s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-155000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-155000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (11.660440958s)

-- stdout --
	* [embed-certs-155000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-155000" primary control-plane node in "embed-certs-155000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-155000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 11:20:39.787168    6893 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:20:39.787268    6893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:20:39.787271    6893 out.go:358] Setting ErrFile to fd 2...
	I0910 11:20:39.787274    6893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:20:39.787403    6893 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:20:39.788508    6893 out.go:352] Setting JSON to false
	I0910 11:20:39.804604    6893 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4803,"bootTime":1725987636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:20:39.804686    6893 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:20:39.809278    6893 out.go:177] * [embed-certs-155000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:20:39.817334    6893 notify.go:220] Checking for updates...
	I0910 11:20:39.820222    6893 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:20:39.827164    6893 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:20:39.835291    6893 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:20:39.839180    6893 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:20:39.846232    6893 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:20:39.850258    6893 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:20:39.853602    6893 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:20:39.853662    6893 config.go:182] Loaded profile config "no-preload-738000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:20:39.853711    6893 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:20:39.858243    6893 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 11:20:39.865219    6893 start.go:297] selected driver: qemu2
	I0910 11:20:39.865225    6893 start.go:901] validating driver "qemu2" against <nil>
	I0910 11:20:39.865230    6893 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:20:39.867497    6893 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 11:20:39.870251    6893 out.go:177] * Automatically selected the socket_vmnet network
	I0910 11:20:39.874290    6893 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 11:20:39.874341    6893 cni.go:84] Creating CNI manager for ""
	I0910 11:20:39.874361    6893 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:20:39.874365    6893 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 11:20:39.874397    6893 start.go:340] cluster config:
	{Name:embed-certs-155000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-155000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:20:39.878050    6893 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:20:39.886218    6893 out.go:177] * Starting "embed-certs-155000" primary control-plane node in "embed-certs-155000" cluster
	I0910 11:20:39.890267    6893 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:20:39.890284    6893 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:20:39.890292    6893 cache.go:56] Caching tarball of preloaded images
	I0910 11:20:39.890367    6893 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:20:39.890373    6893 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:20:39.890444    6893 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/embed-certs-155000/config.json ...
	I0910 11:20:39.890460    6893 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/embed-certs-155000/config.json: {Name:mkc4f6aad9f8d739ce169d715329767d81542c37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:20:39.890683    6893 start.go:360] acquireMachinesLock for embed-certs-155000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:20:41.666027    6893 start.go:364] duration metric: took 1.775326792s to acquireMachinesLock for "embed-certs-155000"
	I0910 11:20:41.666204    6893 start.go:93] Provisioning new machine with config: &{Name:embed-certs-155000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-155000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:20:41.666437    6893 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:20:41.679926    6893 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 11:20:41.729967    6893 start.go:159] libmachine.API.Create for "embed-certs-155000" (driver="qemu2")
	I0910 11:20:41.730008    6893 client.go:168] LocalClient.Create starting
	I0910 11:20:41.730116    6893 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:20:41.730178    6893 main.go:141] libmachine: Decoding PEM data...
	I0910 11:20:41.730201    6893 main.go:141] libmachine: Parsing certificate...
	I0910 11:20:41.730268    6893 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:20:41.730311    6893 main.go:141] libmachine: Decoding PEM data...
	I0910 11:20:41.730324    6893 main.go:141] libmachine: Parsing certificate...
	I0910 11:20:41.730960    6893 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:20:41.909635    6893 main.go:141] libmachine: Creating SSH key...
	I0910 11:20:42.001317    6893 main.go:141] libmachine: Creating Disk image...
	I0910 11:20:42.001326    6893 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:20:42.001535    6893 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/disk.qcow2
	I0910 11:20:42.011586    6893 main.go:141] libmachine: STDOUT: 
	I0910 11:20:42.011612    6893 main.go:141] libmachine: STDERR: 
	I0910 11:20:42.011676    6893 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/disk.qcow2 +20000M
	I0910 11:20:42.020906    6893 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:20:42.020925    6893 main.go:141] libmachine: STDERR: 
	I0910 11:20:42.020945    6893 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/disk.qcow2
	I0910 11:20:42.020949    6893 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:20:42.020963    6893 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:20:42.020992    6893 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:49:96:5f:99:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/disk.qcow2
	I0910 11:20:42.022693    6893 main.go:141] libmachine: STDOUT: 
	I0910 11:20:42.022709    6893 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:20:42.022730    6893 client.go:171] duration metric: took 292.723625ms to LocalClient.Create
	I0910 11:20:44.024825    6893 start.go:128] duration metric: took 2.358421625s to createHost
	I0910 11:20:44.024931    6893 start.go:83] releasing machines lock for "embed-certs-155000", held for 2.358895834s
	W0910 11:20:44.024985    6893 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:20:44.033240    6893 out.go:177] * Deleting "embed-certs-155000" in qemu2 ...
	W0910 11:20:44.063196    6893 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:20:44.063222    6893 start.go:729] Will try again in 5 seconds ...
	I0910 11:20:49.063575    6893 start.go:360] acquireMachinesLock for embed-certs-155000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:20:49.064103    6893 start.go:364] duration metric: took 401.25µs to acquireMachinesLock for "embed-certs-155000"
	I0910 11:20:49.064246    6893 start.go:93] Provisioning new machine with config: &{Name:embed-certs-155000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-155000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:20:49.064562    6893 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:20:49.076352    6893 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 11:20:49.127491    6893 start.go:159] libmachine.API.Create for "embed-certs-155000" (driver="qemu2")
	I0910 11:20:49.127544    6893 client.go:168] LocalClient.Create starting
	I0910 11:20:49.127665    6893 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:20:49.127718    6893 main.go:141] libmachine: Decoding PEM data...
	I0910 11:20:49.127732    6893 main.go:141] libmachine: Parsing certificate...
	I0910 11:20:49.127795    6893 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:20:49.127839    6893 main.go:141] libmachine: Decoding PEM data...
	I0910 11:20:49.127860    6893 main.go:141] libmachine: Parsing certificate...
	I0910 11:20:49.128395    6893 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:20:49.298895    6893 main.go:141] libmachine: Creating SSH key...
	I0910 11:20:49.334985    6893 main.go:141] libmachine: Creating Disk image...
	I0910 11:20:49.334990    6893 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:20:49.335230    6893 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/disk.qcow2
	I0910 11:20:49.344471    6893 main.go:141] libmachine: STDOUT: 
	I0910 11:20:49.344492    6893 main.go:141] libmachine: STDERR: 
	I0910 11:20:49.344538    6893 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/disk.qcow2 +20000M
	I0910 11:20:49.352511    6893 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:20:49.352527    6893 main.go:141] libmachine: STDERR: 
	I0910 11:20:49.352536    6893 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/disk.qcow2
	I0910 11:20:49.352549    6893 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:20:49.352557    6893 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:20:49.352586    6893 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:f7:90:e0:95:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/disk.qcow2
	I0910 11:20:49.354254    6893 main.go:141] libmachine: STDOUT: 
	I0910 11:20:49.354269    6893 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:20:49.354282    6893 client.go:171] duration metric: took 226.737792ms to LocalClient.Create
	I0910 11:20:51.356376    6893 start.go:128] duration metric: took 2.291793667s to createHost
	I0910 11:20:51.356481    6893 start.go:83] releasing machines lock for "embed-certs-155000", held for 2.292409542s
	W0910 11:20:51.356763    6893 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-155000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-155000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:20:51.371309    6893 out.go:201] 
	W0910 11:20:51.380392    6893 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:20:51.380434    6893 out.go:270] * 
	* 
	W0910 11:20:51.383313    6893 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:20:51.394271    6893 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-155000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-155000 -n embed-certs-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-155000 -n embed-certs-155000: exit status 7 (65.447875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-155000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (11.73s)
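Every VM-start failure in this report reduces to the same condition: the qemu2 driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon's unix socket. A minimal Go sketch of that probe (not part of the suite; the socket path is taken from SocketVMnetPath in the cluster configs logged below):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Dial the same unix socket the driver hands to socket_vmnet_client.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// This is the condition the logs report as
		// `Failed to connect to "/var/run/socket_vmnet": Connection refused`:
		// nothing is listening on the socket, so every VM start fails.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}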

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-738000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-738000 create -f testdata/busybox.yaml: exit status 1 (30.941917ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-738000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-738000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-738000 -n no-preload-738000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-738000 -n no-preload-738000: exit status 7 (33.0835ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-738000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-738000 -n no-preload-738000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-738000 -n no-preload-738000: exit status 7 (32.201291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-738000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-738000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-738000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-738000 describe deploy/metrics-server -n kube-system: exit status 1 (27.474833ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-738000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-738000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-738000 -n no-preload-738000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-738000 -n no-preload-738000: exit status 7 (30.571708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-738000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.14s)
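The string this test looks for, " fake.domain/registry.k8s.io/echoserver:1.4", is the --registries override prefixed onto the --images override. A sketch of that composition, assuming the override simply joins registry and image reference with a slash (the real addon code may do more):

package main

import "fmt"

func main() {
	image := "registry.k8s.io/echoserver:1.4" // from --images=MetricsServer=...
	registry := "fake.domain"                 // from --registries=MetricsServer=...
	fmt.Println(registry + "/" + image)       // fake.domain/registry.k8s.io/echoserver:1.4
}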

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (5.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-738000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-738000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.767143042s)

                                                
                                                
-- stdout --
	* [no-preload-738000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-738000" primary control-plane node in "no-preload-738000" cluster
	* Restarting existing qemu2 VM for "no-preload-738000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-738000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 11:20:45.697656    6940 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:20:45.697796    6940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:20:45.697799    6940 out.go:358] Setting ErrFile to fd 2...
	I0910 11:20:45.697801    6940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:20:45.697938    6940 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:20:45.698945    6940 out.go:352] Setting JSON to false
	I0910 11:20:45.715005    6940 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4809,"bootTime":1725987636,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:20:45.715078    6940 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:20:45.719234    6940 out.go:177] * [no-preload-738000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:20:45.726320    6940 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:20:45.726359    6940 notify.go:220] Checking for updates...
	I0910 11:20:45.733273    6940 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:20:45.736295    6940 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:20:45.739216    6940 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:20:45.742299    6940 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:20:45.745297    6940 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:20:45.747165    6940 config.go:182] Loaded profile config "no-preload-738000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:20:45.747425    6940 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:20:45.752243    6940 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 11:20:45.759095    6940 start.go:297] selected driver: qemu2
	I0910 11:20:45.759103    6940 start.go:901] validating driver "qemu2" against &{Name:no-preload-738000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-738000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:20:45.759174    6940 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:20:45.761436    6940 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 11:20:45.761462    6940 cni.go:84] Creating CNI manager for ""
	I0910 11:20:45.761472    6940 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:20:45.761502    6940 start.go:340] cluster config:
	{Name:no-preload-738000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-738000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:20:45.765021    6940 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:20:45.772269    6940 out.go:177] * Starting "no-preload-738000" primary control-plane node in "no-preload-738000" cluster
	I0910 11:20:45.776234    6940 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:20:45.776296    6940 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/no-preload-738000/config.json ...
	I0910 11:20:45.776319    6940 cache.go:107] acquiring lock: {Name:mk06ce94e3b7e3ca8885184edeca4f7e5645ca7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:20:45.776327    6940 cache.go:107] acquiring lock: {Name:mk18824a930e97209272e5cf2e5cc5380cc03b98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:20:45.776330    6940 cache.go:107] acquiring lock: {Name:mke7024365ca08d407bef3b5b845f73ef105af20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:20:45.776376    6940 cache.go:115] /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0910 11:20:45.776381    6940 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 62.875µs
	I0910 11:20:45.776389    6940 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0910 11:20:45.776392    6940 cache.go:115] /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0910 11:20:45.776395    6940 cache.go:107] acquiring lock: {Name:mkc3e627fb73e259c37d748d5e6df7162f5e3b43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:20:45.776401    6940 cache.go:115] /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0910 11:20:45.776401    6940 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 76.416µs
	I0910 11:20:45.776411    6940 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0910 11:20:45.776406    6940 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 84.916µs
	I0910 11:20:45.776417    6940 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0910 11:20:45.776415    6940 cache.go:107] acquiring lock: {Name:mk971764c5b139461a306725ae5e6036e89bc73e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:20:45.776424    6940 cache.go:107] acquiring lock: {Name:mkc72d636ab182fcf861e1e15e2606b32c64a9a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:20:45.776491    6940 cache.go:115] /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0910 11:20:45.776496    6940 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 82.791µs
	I0910 11:20:45.776501    6940 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0910 11:20:45.776430    6940 cache.go:115] /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0910 11:20:45.776507    6940 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 113.125µs
	I0910 11:20:45.776511    6940 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0910 11:20:45.776431    6940 cache.go:107] acquiring lock: {Name:mk86d5bead7159d19bbcbbfcd8f52d4ea3bfb1ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:20:45.776515    6940 cache.go:115] /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0910 11:20:45.776519    6940 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 95.25µs
	I0910 11:20:45.776524    6940 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0910 11:20:45.776545    6940 cache.go:115] /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0910 11:20:45.776545    6940 cache.go:107] acquiring lock: {Name:mk8d7ec58511be189d6e47e5a53f0f631eb45385 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:20:45.776549    6940 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 118.083µs
	I0910 11:20:45.776553    6940 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0910 11:20:45.776607    6940 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0910 11:20:45.776759    6940 start.go:360] acquireMachinesLock for no-preload-738000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:20:45.776799    6940 start.go:364] duration metric: took 31.667µs to acquireMachinesLock for "no-preload-738000"
	I0910 11:20:45.776809    6940 start.go:96] Skipping create...Using existing machine configuration
	I0910 11:20:45.776813    6940 fix.go:54] fixHost starting: 
	I0910 11:20:45.776943    6940 fix.go:112] recreateIfNeeded on no-preload-738000: state=Stopped err=<nil>
	W0910 11:20:45.776950    6940 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 11:20:45.784225    6940 out.go:177] * Restarting existing qemu2 VM for "no-preload-738000" ...
	I0910 11:20:45.787263    6940 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:20:45.787315    6940 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:87:c1:ee:68:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/disk.qcow2
	I0910 11:20:45.787828    6940 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0910 11:20:45.789557    6940 main.go:141] libmachine: STDOUT: 
	I0910 11:20:45.789577    6940 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:20:45.789604    6940 fix.go:56] duration metric: took 12.790167ms for fixHost
	I0910 11:20:45.789609    6940 start.go:83] releasing machines lock for "no-preload-738000", held for 12.804792ms
	W0910 11:20:45.789617    6940 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:20:45.789658    6940 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:20:45.789669    6940 start.go:729] Will try again in 5 seconds ...
	I0910 11:20:46.667684    6940 cache.go:162] opening:  /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0910 11:20:50.789756    6940 start.go:360] acquireMachinesLock for no-preload-738000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:20:51.356611    6940 start.go:364] duration metric: took 566.768375ms to acquireMachinesLock for "no-preload-738000"
	I0910 11:20:51.356772    6940 start.go:96] Skipping create...Using existing machine configuration
	I0910 11:20:51.356791    6940 fix.go:54] fixHost starting: 
	I0910 11:20:51.357455    6940 fix.go:112] recreateIfNeeded on no-preload-738000: state=Stopped err=<nil>
	W0910 11:20:51.357483    6940 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 11:20:51.371288    6940 out.go:177] * Restarting existing qemu2 VM for "no-preload-738000" ...
	I0910 11:20:51.376231    6940 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:20:51.376415    6940 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:87:c1:ee:68:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/no-preload-738000/disk.qcow2
	I0910 11:20:51.386749    6940 main.go:141] libmachine: STDOUT: 
	I0910 11:20:51.386813    6940 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:20:51.386904    6940 fix.go:56] duration metric: took 30.116416ms for fixHost
	I0910 11:20:51.386925    6940 start.go:83] releasing machines lock for "no-preload-738000", held for 30.278209ms
	W0910 11:20:51.387125    6940 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-738000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-738000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:20:51.402281    6940 out.go:201] 
	W0910 11:20:51.409286    6940 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:20:51.409324    6940 out.go:270] * 
	* 
	W0910 11:20:51.411813    6940 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:20:51.424235    6940 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-738000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-738000 -n no-preload-738000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-738000 -n no-preload-738000: exit status 7 (54.191333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-738000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.82s)
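The stderr above shows the shape of the start path's retry: one failed fixHost, a fixed five-second wait ("Will try again in 5 seconds ..."), one more attempt, then exit status 80. A stripped-down sketch of that flow (a hypothetical stand-in, not minikube's actual start.go):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the real driver start; here it always fails the
// way the log does.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Printf("* Failed to start qemu2 VM: %v\n", err)
		}
	}
}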

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-155000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-155000 create -f testdata/busybox.yaml: exit status 1 (31.429042ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-155000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-155000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-155000 -n embed-certs-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-155000 -n embed-certs-155000: exit status 7 (29.699166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-155000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-155000 -n embed-certs-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-155000 -n embed-certs-155000: exit status 7 (34.609208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-155000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-738000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-738000 -n no-preload-738000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-738000 -n no-preload-738000: exit status 7 (34.799917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-738000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-738000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-738000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-738000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.841458ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-738000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-738000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-738000 -n no-preload-738000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-738000 -n no-preload-738000: exit status 7 (30.876625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-738000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-155000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-155000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-155000 describe deploy/metrics-server -n kube-system: exit status 1 (28.483416ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-155000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-155000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-155000 -n embed-certs-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-155000 -n embed-certs-155000: exit status 7 (30.593042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-155000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-738000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-738000 -n no-preload-738000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-738000 -n no-preload-738000: exit status 7 (30.656167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-738000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
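The "(-want +got)" block above is a set difference: every image expected for v1.31.0 is reported missing because the host never started and `image list` returned nothing. A small sketch of that comparison (assumed shape; the test itself uses a diff helper):

package main

import "fmt"

// missing returns the entries of want that do not appear in got.
func missing(want, got []string) []string {
	have := make(map[string]bool, len(got))
	for _, g := range got {
		have[g] = true
	}
	var out []string
	for _, w := range want {
		if !have[w] {
			out = append(out, w)
		}
	}
	return out
}

func main() {
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.31.0",
		"registry.k8s.io/pause:3.10",
	}
	got := []string{} // nothing listed: the VM never came up
	for _, m := range missing(want, got) {
		fmt.Println("-", m)
	}
}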

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-738000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-738000 --alsologtostderr -v=1: exit status 83 (48.868125ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-738000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-738000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 11:20:51.701587    6977 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:20:51.701736    6977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:20:51.701741    6977 out.go:358] Setting ErrFile to fd 2...
	I0910 11:20:51.701744    6977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:20:51.701890    6977 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:20:51.702113    6977 out.go:352] Setting JSON to false
	I0910 11:20:51.702121    6977 mustload.go:65] Loading cluster: no-preload-738000
	I0910 11:20:51.702345    6977 config.go:182] Loaded profile config "no-preload-738000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:20:51.707120    6977 out.go:177] * The control-plane node no-preload-738000 host is not running: state=Stopped
	I0910 11:20:51.714168    6977 out.go:177]   To start a cluster, run: "minikube start -p no-preload-738000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-738000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-738000 -n no-preload-738000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-738000 -n no-preload-738000: exit status 7 (29.606583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-738000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-738000 -n no-preload-738000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-738000 -n no-preload-738000: exit status 7 (29.16975ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-738000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-258000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-258000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.892733459s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-258000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-258000" primary control-plane node in "default-k8s-diff-port-258000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-258000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 11:20:52.112595    7009 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:20:52.112722    7009 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:20:52.112725    7009 out.go:358] Setting ErrFile to fd 2...
	I0910 11:20:52.112727    7009 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:20:52.112845    7009 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:20:52.113938    7009 out.go:352] Setting JSON to false
	I0910 11:20:52.129860    7009 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4816,"bootTime":1725987636,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:20:52.129919    7009 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:20:52.135263    7009 out.go:177] * [default-k8s-diff-port-258000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:20:52.143084    7009 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:20:52.143141    7009 notify.go:220] Checking for updates...
	I0910 11:20:52.150207    7009 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:20:52.153089    7009 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:20:52.156185    7009 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:20:52.159160    7009 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:20:52.160718    7009 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:20:52.164570    7009 config.go:182] Loaded profile config "embed-certs-155000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:20:52.164633    7009 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:20:52.164682    7009 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:20:52.169159    7009 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 11:20:52.175211    7009 start.go:297] selected driver: qemu2
	I0910 11:20:52.175218    7009 start.go:901] validating driver "qemu2" against <nil>
	I0910 11:20:52.175225    7009 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:20:52.177449    7009 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 11:20:52.181182    7009 out.go:177] * Automatically selected the socket_vmnet network
	I0910 11:20:52.182828    7009 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 11:20:52.182859    7009 cni.go:84] Creating CNI manager for ""
	I0910 11:20:52.182871    7009 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:20:52.182878    7009 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 11:20:52.182908    7009 start.go:340] cluster config:
	{Name:default-k8s-diff-port-258000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-258000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:20:52.186600    7009 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:20:52.194268    7009 out.go:177] * Starting "default-k8s-diff-port-258000" primary control-plane node in "default-k8s-diff-port-258000" cluster
	I0910 11:20:52.198172    7009 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:20:52.198185    7009 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:20:52.198193    7009 cache.go:56] Caching tarball of preloaded images
	I0910 11:20:52.198247    7009 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:20:52.198252    7009 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:20:52.198307    7009 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/default-k8s-diff-port-258000/config.json ...
	I0910 11:20:52.198318    7009 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/default-k8s-diff-port-258000/config.json: {Name:mkd44199db915b972f199567669dc69ea8fd1fe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:20:52.198543    7009 start.go:360] acquireMachinesLock for default-k8s-diff-port-258000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:20:52.198577    7009 start.go:364] duration metric: took 26.875µs to acquireMachinesLock for "default-k8s-diff-port-258000"
	I0910 11:20:52.198589    7009 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-258000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-258000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:20:52.198614    7009 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:20:52.206185    7009 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 11:20:52.223753    7009 start.go:159] libmachine.API.Create for "default-k8s-diff-port-258000" (driver="qemu2")
	I0910 11:20:52.223779    7009 client.go:168] LocalClient.Create starting
	I0910 11:20:52.223847    7009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:20:52.223881    7009 main.go:141] libmachine: Decoding PEM data...
	I0910 11:20:52.223891    7009 main.go:141] libmachine: Parsing certificate...
	I0910 11:20:52.223926    7009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:20:52.223948    7009 main.go:141] libmachine: Decoding PEM data...
	I0910 11:20:52.223956    7009 main.go:141] libmachine: Parsing certificate...
	I0910 11:20:52.224382    7009 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:20:52.384819    7009 main.go:141] libmachine: Creating SSH key...
	I0910 11:20:52.550687    7009 main.go:141] libmachine: Creating Disk image...
	I0910 11:20:52.550693    7009 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:20:52.550913    7009 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/disk.qcow2
	I0910 11:20:52.560536    7009 main.go:141] libmachine: STDOUT: 
	I0910 11:20:52.560559    7009 main.go:141] libmachine: STDERR: 
	I0910 11:20:52.560602    7009 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/disk.qcow2 +20000M
	I0910 11:20:52.568530    7009 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:20:52.568550    7009 main.go:141] libmachine: STDERR: 
	I0910 11:20:52.568564    7009 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/disk.qcow2
	I0910 11:20:52.568568    7009 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:20:52.568578    7009 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:20:52.568611    7009 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:a3:6b:28:d1:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/disk.qcow2
	I0910 11:20:52.570270    7009 main.go:141] libmachine: STDOUT: 
	I0910 11:20:52.570287    7009 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:20:52.570306    7009 client.go:171] duration metric: took 346.5315ms to LocalClient.Create
	I0910 11:20:54.572506    7009 start.go:128] duration metric: took 2.373929792s to createHost
	I0910 11:20:54.572581    7009 start.go:83] releasing machines lock for "default-k8s-diff-port-258000", held for 2.374056167s
	W0910 11:20:54.572667    7009 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:20:54.590779    7009 out.go:177] * Deleting "default-k8s-diff-port-258000" in qemu2 ...
	W0910 11:20:54.622488    7009 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:20:54.622510    7009 start.go:729] Will try again in 5 seconds ...
	I0910 11:20:59.624572    7009 start.go:360] acquireMachinesLock for default-k8s-diff-port-258000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:20:59.625173    7009 start.go:364] duration metric: took 447.208µs to acquireMachinesLock for "default-k8s-diff-port-258000"
	I0910 11:20:59.625353    7009 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-258000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-258000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:20:59.625714    7009 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:20:59.628643    7009 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 11:20:59.676789    7009 start.go:159] libmachine.API.Create for "default-k8s-diff-port-258000" (driver="qemu2")
	I0910 11:20:59.676845    7009 client.go:168] LocalClient.Create starting
	I0910 11:20:59.676983    7009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:20:59.677046    7009 main.go:141] libmachine: Decoding PEM data...
	I0910 11:20:59.677067    7009 main.go:141] libmachine: Parsing certificate...
	I0910 11:20:59.677126    7009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:20:59.677165    7009 main.go:141] libmachine: Decoding PEM data...
	I0910 11:20:59.677175    7009 main.go:141] libmachine: Parsing certificate...
	I0910 11:20:59.677832    7009 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:20:59.849006    7009 main.go:141] libmachine: Creating SSH key...
	I0910 11:20:59.896702    7009 main.go:141] libmachine: Creating Disk image...
	I0910 11:20:59.896710    7009 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:20:59.896918    7009 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/disk.qcow2
	I0910 11:20:59.906065    7009 main.go:141] libmachine: STDOUT: 
	I0910 11:20:59.906096    7009 main.go:141] libmachine: STDERR: 
	I0910 11:20:59.906143    7009 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/disk.qcow2 +20000M
	I0910 11:20:59.913992    7009 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:20:59.914017    7009 main.go:141] libmachine: STDERR: 
	I0910 11:20:59.914033    7009 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/disk.qcow2
	I0910 11:20:59.914037    7009 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:20:59.914050    7009 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:20:59.914082    7009 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:b7:a2:d5:d2:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/disk.qcow2
	I0910 11:20:59.915680    7009 main.go:141] libmachine: STDOUT: 
	I0910 11:20:59.915707    7009 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:20:59.915718    7009 client.go:171] duration metric: took 238.875584ms to LocalClient.Create
	I0910 11:21:01.917914    7009 start.go:128] duration metric: took 2.292223208s to createHost
	I0910 11:21:01.918014    7009 start.go:83] releasing machines lock for "default-k8s-diff-port-258000", held for 2.292846833s
	W0910 11:21:01.918518    7009 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-258000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-258000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:21:01.928315    7009 out.go:201] 
	W0910 11:21:01.946317    7009 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:21:01.946346    7009 out.go:270] * 
	* 
	W0910 11:21:01.948840    7009 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:21:01.961139    7009 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-258000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-258000 -n default-k8s-diff-port-258000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-258000 -n default-k8s-diff-port-258000: exit status 7 (62.898209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-258000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.96s)
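
Every failed start in this group dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon, so QEMU is never launched and the machine is torn down. A minimal Go sketch of the same unix-socket dial (a diagnostic probe, not part of minikube; the socket path is the SocketVMnetPath value from the cluster config above) can confirm whether the daemon on the agent is accepting connections:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket that socket_vmnet_client targets; a
		// "connection refused" here matches every failure in this report.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe reports "connection refused", the socket_vmnet service on this agent is down or its socket is stale, which would account for every GUEST_PROVISION failure in this group.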

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (6.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-155000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-155000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (6.231895833s)

                                                
                                                
-- stdout --
	* [embed-certs-155000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-155000" primary control-plane node in "embed-certs-155000" cluster
	* Restarting existing qemu2 VM for "embed-certs-155000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-155000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 11:20:55.793953    7037 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:20:55.794086    7037 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:20:55.794092    7037 out.go:358] Setting ErrFile to fd 2...
	I0910 11:20:55.794095    7037 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:20:55.794220    7037 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:20:55.795329    7037 out.go:352] Setting JSON to false
	I0910 11:20:55.811911    7037 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4819,"bootTime":1725987636,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:20:55.811985    7037 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:20:55.816329    7037 out.go:177] * [embed-certs-155000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:20:55.823478    7037 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:20:55.823537    7037 notify.go:220] Checking for updates...
	I0910 11:20:55.830484    7037 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:20:55.833476    7037 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:20:55.836458    7037 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:20:55.839407    7037 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:20:55.842460    7037 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:20:55.845642    7037 config.go:182] Loaded profile config "embed-certs-155000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:20:55.845914    7037 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:20:55.850453    7037 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 11:20:55.857333    7037 start.go:297] selected driver: qemu2
	I0910 11:20:55.857339    7037 start.go:901] validating driver "qemu2" against &{Name:embed-certs-155000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-155000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:20:55.857389    7037 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:20:55.859903    7037 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 11:20:55.859960    7037 cni.go:84] Creating CNI manager for ""
	I0910 11:20:55.859969    7037 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:20:55.859993    7037 start.go:340] cluster config:
	{Name:embed-certs-155000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-155000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:20:55.863846    7037 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:20:55.871270    7037 out.go:177] * Starting "embed-certs-155000" primary control-plane node in "embed-certs-155000" cluster
	I0910 11:20:55.875411    7037 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:20:55.875426    7037 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:20:55.875433    7037 cache.go:56] Caching tarball of preloaded images
	I0910 11:20:55.875494    7037 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:20:55.875499    7037 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:20:55.875570    7037 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/embed-certs-155000/config.json ...
	I0910 11:20:55.876130    7037 start.go:360] acquireMachinesLock for embed-certs-155000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:20:55.876163    7037 start.go:364] duration metric: took 27.042µs to acquireMachinesLock for "embed-certs-155000"
	I0910 11:20:55.876172    7037 start.go:96] Skipping create...Using existing machine configuration
	I0910 11:20:55.876176    7037 fix.go:54] fixHost starting: 
	I0910 11:20:55.876293    7037 fix.go:112] recreateIfNeeded on embed-certs-155000: state=Stopped err=<nil>
	W0910 11:20:55.876301    7037 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 11:20:55.878304    7037 out.go:177] * Restarting existing qemu2 VM for "embed-certs-155000" ...
	I0910 11:20:55.886419    7037 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:20:55.886465    7037 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:f7:90:e0:95:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/disk.qcow2
	I0910 11:20:55.888572    7037 main.go:141] libmachine: STDOUT: 
	I0910 11:20:55.888593    7037 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:20:55.888622    7037 fix.go:56] duration metric: took 12.445209ms for fixHost
	I0910 11:20:55.888626    7037 start.go:83] releasing machines lock for "embed-certs-155000", held for 12.458542ms
	W0910 11:20:55.888634    7037 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:20:55.888666    7037 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:20:55.888671    7037 start.go:729] Will try again in 5 seconds ...
	I0910 11:21:00.890772    7037 start.go:360] acquireMachinesLock for embed-certs-155000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:21:01.918242    7037 start.go:364] duration metric: took 1.027373667s to acquireMachinesLock for "embed-certs-155000"
	I0910 11:21:01.918444    7037 start.go:96] Skipping create...Using existing machine configuration
	I0910 11:21:01.918467    7037 fix.go:54] fixHost starting: 
	I0910 11:21:01.919254    7037 fix.go:112] recreateIfNeeded on embed-certs-155000: state=Stopped err=<nil>
	W0910 11:21:01.919281    7037 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 11:21:01.942199    7037 out.go:177] * Restarting existing qemu2 VM for "embed-certs-155000" ...
	I0910 11:21:01.949157    7037 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:21:01.949423    7037 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:f7:90:e0:95:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/embed-certs-155000/disk.qcow2
	I0910 11:21:01.959075    7037 main.go:141] libmachine: STDOUT: 
	I0910 11:21:01.959141    7037 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:21:01.959221    7037 fix.go:56] duration metric: took 40.756375ms for fixHost
	I0910 11:21:01.959240    7037 start.go:83] releasing machines lock for "embed-certs-155000", held for 40.947625ms
	W0910 11:21:01.959424    7037 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-155000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-155000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:21:01.968077    7037 out.go:201] 
	W0910 11:21:01.978241    7037 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:21:01.978270    7037 out.go:270] * 
	* 
	W0910 11:21:01.980441    7037 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:21:01.991219    7037 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-155000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-155000 -n embed-certs-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-155000 -n embed-certs-155000: exit status 7 (50.217958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-155000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-258000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-258000 create -f testdata/busybox.yaml: exit status 1 (31.770334ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-258000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-258000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-258000 -n default-k8s-diff-port-258000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-258000 -n default-k8s-diff-port-258000: exit status 7 (31.26375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-258000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-258000 -n default-k8s-diff-port-258000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-258000 -n default-k8s-diff-port-258000: exit status 7 (33.578875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-258000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
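
This failure, and the kubectl-based checks that follow, are downstream of the failed starts: the cluster was never provisioned, so minikube never wrote a context into the kubeconfig, and every "kubectl --context" invocation exits with "does not exist". A short client-go sketch (a hypothetical standalone check, assuming k8s.io/client-go is available; the kubeconfig path is the KUBECONFIG value printed in the start output above) of the same lookup:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig the tests point at and look up the context
		// that kubectl reports as missing.
		cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19598-1276/kubeconfig")
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		if _, ok := cfg.Contexts["default-k8s-diff-port-258000"]; !ok {
			fmt.Println(`context "default-k8s-diff-port-258000" does not exist`)
		}
	}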

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-155000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-155000 -n embed-certs-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-155000 -n embed-certs-155000: exit status 7 (32.407542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-155000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-155000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-155000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-155000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.708416ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-155000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-155000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-155000 -n embed-certs-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-155000 -n embed-certs-155000: exit status 7 (29.79ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-155000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-258000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-258000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-258000 describe deploy/metrics-server -n kube-system: exit status 1 (28.541125ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-258000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-258000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-258000 -n default-k8s-diff-port-258000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-258000 -n default-k8s-diff-port-258000: exit status 7 (34.844292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-258000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-155000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-155000 -n embed-certs-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-155000 -n embed-certs-155000: exit status 7 (30.210125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-155000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-155000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-155000 --alsologtostderr -v=1: exit status 83 (50.138542ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-155000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-155000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 11:21:02.255127    7074 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:21:02.255290    7074 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:21:02.255294    7074 out.go:358] Setting ErrFile to fd 2...
	I0910 11:21:02.255297    7074 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:21:02.255451    7074 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:21:02.255703    7074 out.go:352] Setting JSON to false
	I0910 11:21:02.255712    7074 mustload.go:65] Loading cluster: embed-certs-155000
	I0910 11:21:02.255926    7074 config.go:182] Loaded profile config "embed-certs-155000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:21:02.259155    7074 out.go:177] * The control-plane node embed-certs-155000 host is not running: state=Stopped
	I0910 11:21:02.268143    7074 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-155000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-155000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-155000 -n embed-certs-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-155000 -n embed-certs-155000: exit status 7 (38.306083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-155000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-155000 -n embed-certs-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-155000 -n embed-certs-155000: exit status 7 (28.686042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-155000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (10.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-194000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-194000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (10.082860875s)

                                                
                                                
-- stdout --
	* [newest-cni-194000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-194000" primary control-plane node in "newest-cni-194000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-194000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 11:21:02.576891    7099 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:21:02.577005    7099 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:21:02.577009    7099 out.go:358] Setting ErrFile to fd 2...
	I0910 11:21:02.577012    7099 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:21:02.577131    7099 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:21:02.578182    7099 out.go:352] Setting JSON to false
	I0910 11:21:02.594548    7099 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4826,"bootTime":1725987636,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:21:02.594621    7099 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:21:02.599134    7099 out.go:177] * [newest-cni-194000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:21:02.605132    7099 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:21:02.605172    7099 notify.go:220] Checking for updates...
	I0910 11:21:02.612112    7099 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:21:02.615124    7099 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:21:02.618114    7099 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:21:02.621106    7099 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:21:02.624123    7099 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:21:02.627400    7099 config.go:182] Loaded profile config "default-k8s-diff-port-258000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:21:02.627464    7099 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:21:02.627519    7099 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:21:02.631120    7099 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 11:21:02.638046    7099 start.go:297] selected driver: qemu2
	I0910 11:21:02.638053    7099 start.go:901] validating driver "qemu2" against <nil>
	I0910 11:21:02.638060    7099 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:21:02.640380    7099 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0910 11:21:02.640402    7099 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0910 11:21:02.648114    7099 out.go:177] * Automatically selected the socket_vmnet network
	I0910 11:21:02.651135    7099 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0910 11:21:02.651149    7099 cni.go:84] Creating CNI manager for ""
	I0910 11:21:02.651156    7099 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:21:02.651160    7099 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 11:21:02.651196    7099 start.go:340] cluster config:
	{Name:newest-cni-194000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:21:02.654956    7099 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:21:02.663124    7099 out.go:177] * Starting "newest-cni-194000" primary control-plane node in "newest-cni-194000" cluster
	I0910 11:21:02.667080    7099 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:21:02.667096    7099 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:21:02.667108    7099 cache.go:56] Caching tarball of preloaded images
	I0910 11:21:02.667167    7099 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:21:02.667173    7099 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:21:02.667240    7099 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/newest-cni-194000/config.json ...
	I0910 11:21:02.667251    7099 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/newest-cni-194000/config.json: {Name:mk4160004691d495d6ca682375fc9ea7a20372fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 11:21:02.667472    7099 start.go:360] acquireMachinesLock for newest-cni-194000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:21:02.667504    7099 start.go:364] duration metric: took 26.958µs to acquireMachinesLock for "newest-cni-194000"
	I0910 11:21:02.667516    7099 start.go:93] Provisioning new machine with config: &{Name:newest-cni-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:21:02.667548    7099 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:21:02.676106    7099 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 11:21:02.693587    7099 start.go:159] libmachine.API.Create for "newest-cni-194000" (driver="qemu2")
	I0910 11:21:02.693615    7099 client.go:168] LocalClient.Create starting
	I0910 11:21:02.693670    7099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:21:02.693698    7099 main.go:141] libmachine: Decoding PEM data...
	I0910 11:21:02.693707    7099 main.go:141] libmachine: Parsing certificate...
	I0910 11:21:02.693743    7099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:21:02.693768    7099 main.go:141] libmachine: Decoding PEM data...
	I0910 11:21:02.693775    7099 main.go:141] libmachine: Parsing certificate...
	I0910 11:21:02.694133    7099 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:21:02.854921    7099 main.go:141] libmachine: Creating SSH key...
	I0910 11:21:03.143419    7099 main.go:141] libmachine: Creating Disk image...
	I0910 11:21:03.143429    7099 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:21:03.143740    7099 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/disk.qcow2
	I0910 11:21:03.153334    7099 main.go:141] libmachine: STDOUT: 
	I0910 11:21:03.153360    7099 main.go:141] libmachine: STDERR: 
	I0910 11:21:03.153406    7099 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/disk.qcow2 +20000M
	I0910 11:21:03.161452    7099 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:21:03.161468    7099 main.go:141] libmachine: STDERR: 
	I0910 11:21:03.161486    7099 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/disk.qcow2
	I0910 11:21:03.161491    7099 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:21:03.161504    7099 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:21:03.161530    7099 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:67:d3:f7:aa:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/disk.qcow2
	I0910 11:21:03.163144    7099 main.go:141] libmachine: STDOUT: 
	I0910 11:21:03.163162    7099 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:21:03.163180    7099 client.go:171] duration metric: took 469.575083ms to LocalClient.Create
	I0910 11:21:05.165393    7099 start.go:128] duration metric: took 2.497832375s to createHost
	I0910 11:21:05.165642    7099 start.go:83] releasing machines lock for "newest-cni-194000", held for 2.498193292s
	W0910 11:21:05.165759    7099 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:21:05.186103    7099 out.go:177] * Deleting "newest-cni-194000" in qemu2 ...
	W0910 11:21:05.217309    7099 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:21:05.217336    7099 start.go:729] Will try again in 5 seconds ...
	I0910 11:21:10.219408    7099 start.go:360] acquireMachinesLock for newest-cni-194000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:21:10.219848    7099 start.go:364] duration metric: took 332.417µs to acquireMachinesLock for "newest-cni-194000"
	I0910 11:21:10.219965    7099 start.go:93] Provisioning new machine with config: &{Name:newest-cni-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 11:21:10.220284    7099 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 11:21:10.226047    7099 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 11:21:10.275872    7099 start.go:159] libmachine.API.Create for "newest-cni-194000" (driver="qemu2")
	I0910 11:21:10.275924    7099 client.go:168] LocalClient.Create starting
	I0910 11:21:10.276037    7099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/ca.pem
	I0910 11:21:10.276099    7099 main.go:141] libmachine: Decoding PEM data...
	I0910 11:21:10.276118    7099 main.go:141] libmachine: Parsing certificate...
	I0910 11:21:10.276183    7099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19598-1276/.minikube/certs/cert.pem
	I0910 11:21:10.276227    7099 main.go:141] libmachine: Decoding PEM data...
	I0910 11:21:10.276241    7099 main.go:141] libmachine: Parsing certificate...
	I0910 11:21:10.276720    7099 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso...
	I0910 11:21:10.449534    7099 main.go:141] libmachine: Creating SSH key...
	I0910 11:21:10.560978    7099 main.go:141] libmachine: Creating Disk image...
	I0910 11:21:10.560985    7099 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 11:21:10.561262    7099 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/disk.qcow2
	I0910 11:21:10.570867    7099 main.go:141] libmachine: STDOUT: 
	I0910 11:21:10.570887    7099 main.go:141] libmachine: STDERR: 
	I0910 11:21:10.570930    7099 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/disk.qcow2 +20000M
	I0910 11:21:10.578829    7099 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 11:21:10.578854    7099 main.go:141] libmachine: STDERR: 
	I0910 11:21:10.578868    7099 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/disk.qcow2
	I0910 11:21:10.578876    7099 main.go:141] libmachine: Starting QEMU VM...
	I0910 11:21:10.578885    7099 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:21:10.578924    7099 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:75:d8:a2:61:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/disk.qcow2
	I0910 11:21:10.580538    7099 main.go:141] libmachine: STDOUT: 
	I0910 11:21:10.580553    7099 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:21:10.580568    7099 client.go:171] duration metric: took 304.646542ms to LocalClient.Create
	I0910 11:21:12.581591    7099 start.go:128] duration metric: took 2.361283625s to createHost
	I0910 11:21:12.581677    7099 start.go:83] releasing machines lock for "newest-cni-194000", held for 2.361869708s
	W0910 11:21:12.581999    7099 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:21:12.596487    7099 out.go:201] 
	W0910 11:21:12.603358    7099 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:21:12.603383    7099 out.go:270] * 
	* 
	W0910 11:21:12.605979    7099 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:21:12.620392    7099 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-194000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-194000 -n newest-cni-194000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-194000 -n newest-cni-194000: exit status 7 (68.029375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-194000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.15s)
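
Every start failure in this block reduces to one host-side error: the qemu2 driver launches QEMU through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). The following Go probe is an illustrative sketch (not part of minikube) that performs the same unix-socket dial that fails in the STDERR lines above:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client connects to.
		// "connection refused" here matches the STDERR above and means the
		// socket_vmnet daemon is not running or not listening on this path;
		// that is a host-environment failure, not a minikube bug.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, "socket_vmnet not reachable:", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails on the build agent, every socket_vmnet-backed start in this report fails the same way, which matches the repeated "Connection refused" output in the tests that follow.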

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-258000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-258000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (6.868986541s)

-- stdout --
	* [default-k8s-diff-port-258000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-258000" primary control-plane node in "default-k8s-diff-port-258000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-258000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-258000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 11:21:05.814845    7127 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:21:05.814976    7127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:21:05.814979    7127 out.go:358] Setting ErrFile to fd 2...
	I0910 11:21:05.814981    7127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:21:05.815117    7127 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:21:05.816202    7127 out.go:352] Setting JSON to false
	I0910 11:21:05.832500    7127 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4829,"bootTime":1725987636,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:21:05.832568    7127 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:21:05.836320    7127 out.go:177] * [default-k8s-diff-port-258000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:21:05.842370    7127 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:21:05.842404    7127 notify.go:220] Checking for updates...
	I0910 11:21:05.848487    7127 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:21:05.851367    7127 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:21:05.852929    7127 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:21:05.856339    7127 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:21:05.859359    7127 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:21:05.862697    7127 config.go:182] Loaded profile config "default-k8s-diff-port-258000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:21:05.862948    7127 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:21:05.867315    7127 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 11:21:05.874312    7127 start.go:297] selected driver: qemu2
	I0910 11:21:05.874318    7127 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-258000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-258000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:21:05.874371    7127 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:21:05.876661    7127 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 11:21:05.876707    7127 cni.go:84] Creating CNI manager for ""
	I0910 11:21:05.876715    7127 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:21:05.876749    7127 start.go:340] cluster config:
	{Name:default-k8s-diff-port-258000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-258000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:21:05.880296    7127 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:21:05.888374    7127 out.go:177] * Starting "default-k8s-diff-port-258000" primary control-plane node in "default-k8s-diff-port-258000" cluster
	I0910 11:21:05.892308    7127 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:21:05.892322    7127 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:21:05.892330    7127 cache.go:56] Caching tarball of preloaded images
	I0910 11:21:05.892393    7127 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:21:05.892398    7127 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:21:05.892457    7127 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/default-k8s-diff-port-258000/config.json ...
	I0910 11:21:05.893013    7127 start.go:360] acquireMachinesLock for default-k8s-diff-port-258000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:21:05.893048    7127 start.go:364] duration metric: took 29.791µs to acquireMachinesLock for "default-k8s-diff-port-258000"
	I0910 11:21:05.893056    7127 start.go:96] Skipping create...Using existing machine configuration
	I0910 11:21:05.893063    7127 fix.go:54] fixHost starting: 
	I0910 11:21:05.893177    7127 fix.go:112] recreateIfNeeded on default-k8s-diff-port-258000: state=Stopped err=<nil>
	W0910 11:21:05.893185    7127 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 11:21:05.897436    7127 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-258000" ...
	I0910 11:21:05.905317    7127 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:21:05.905355    7127 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:b7:a2:d5:d2:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/disk.qcow2
	I0910 11:21:05.907458    7127 main.go:141] libmachine: STDOUT: 
	I0910 11:21:05.907478    7127 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:21:05.907505    7127 fix.go:56] duration metric: took 14.442125ms for fixHost
	I0910 11:21:05.907510    7127 start.go:83] releasing machines lock for "default-k8s-diff-port-258000", held for 14.457459ms
	W0910 11:21:05.907517    7127 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:21:05.907549    7127 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:21:05.907556    7127 start.go:729] Will try again in 5 seconds ...
	I0910 11:21:10.909578    7127 start.go:360] acquireMachinesLock for default-k8s-diff-port-258000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:21:12.581862    7127 start.go:364] duration metric: took 1.672206333s to acquireMachinesLock for "default-k8s-diff-port-258000"
	I0910 11:21:12.582071    7127 start.go:96] Skipping create...Using existing machine configuration
	I0910 11:21:12.582089    7127 fix.go:54] fixHost starting: 
	I0910 11:21:12.582859    7127 fix.go:112] recreateIfNeeded on default-k8s-diff-port-258000: state=Stopped err=<nil>
	W0910 11:21:12.582885    7127 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 11:21:12.596400    7127 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-258000" ...
	I0910 11:21:12.603318    7127 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:21:12.603590    7127 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:b7:a2:d5:d2:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/default-k8s-diff-port-258000/disk.qcow2
	I0910 11:21:12.611548    7127 main.go:141] libmachine: STDOUT: 
	I0910 11:21:12.611626    7127 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:21:12.611728    7127 fix.go:56] duration metric: took 29.639916ms for fixHost
	I0910 11:21:12.611752    7127 start.go:83] releasing machines lock for "default-k8s-diff-port-258000", held for 29.8145ms
	W0910 11:21:12.611939    7127 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-258000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-258000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:21:12.624146    7127 out.go:201] 
	W0910 11:21:12.632530    7127 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:21:12.632561    7127 out.go:270] * 
	* 
	W0910 11:21:12.635015    7127 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:21:12.645319    7127 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-258000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-258000 -n default-k8s-diff-port-258000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-258000 -n default-k8s-diff-port-258000: exit status 7 (52.798292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-258000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.92s)
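
The SecondStart log above shows the same two-attempt shape as FirstStart: fixHost fails, minikube warns "StartHost failed, but will try again", sleeps five seconds, retries once, then exits with GUEST_PROVISION. A stripped-down sketch of that control flow (hypothetical helper name; minikube's real retry logic lives in start.go and does more):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for minikube's host-start step; it always fails
	// here, the way both attempts above do, to show only the control flow.
	func startHost() error {
		return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}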

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-258000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-258000 -n default-k8s-diff-port-258000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-258000 -n default-k8s-diff-port-258000: exit status 7 (35.349959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-258000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-258000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-258000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-258000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.259083ms)

** stderr ** 
	error: context "default-k8s-diff-port-258000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-258000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-258000 -n default-k8s-diff-port-258000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-258000 -n default-k8s-diff-port-258000: exit status 7 (33.173375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-258000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-258000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-258000 -n default-k8s-diff-port-258000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-258000 -n default-k8s-diff-port-258000: exit status 7 (29.051083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-258000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.06s)
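
The "(-want +got)" block above is the diff convention used by the go-cmp comparison library (an assumption about the harness; the shape of the output is what matters): every expected v1.31.0 image appears on a "-" line and nothing appears on a "+" line, because the VM never booted and "image list" returned no images at all. A self-contained example that produces the same shape of diff:

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.0",
			"registry.k8s.io/pause:3.10",
		}
		var got []string // empty: the VM never started, so no images were loaded

		// With want as the first argument, expected-but-missing entries are
		// printed with a leading "-", mirroring the block above.
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}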

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-258000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-258000 --alsologtostderr -v=1: exit status 83 (40.509167ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-258000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-258000"

-- /stdout --
** stderr ** 
	I0910 11:21:12.898260    7158 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:21:12.898403    7158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:21:12.898406    7158 out.go:358] Setting ErrFile to fd 2...
	I0910 11:21:12.898408    7158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:21:12.898552    7158 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:21:12.898778    7158 out.go:352] Setting JSON to false
	I0910 11:21:12.898785    7158 mustload.go:65] Loading cluster: default-k8s-diff-port-258000
	I0910 11:21:12.898980    7158 config.go:182] Loaded profile config "default-k8s-diff-port-258000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:21:12.903329    7158 out.go:177] * The control-plane node default-k8s-diff-port-258000 host is not running: state=Stopped
	I0910 11:21:12.907311    7158 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-258000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-258000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-258000 -n default-k8s-diff-port-258000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-258000 -n default-k8s-diff-port-258000: exit status 7 (29.4455ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-258000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-258000 -n default-k8s-diff-port-258000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-258000 -n default-k8s-diff-port-258000: exit status 7 (29.163542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-258000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
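
The pause failure is advisory rather than fatal: minikube loads the profile config (mustload.go in the trace above), sees the host state is "Stopped", prints how to start the cluster, and exits with status 83 instead of attempting to pause. A hedged sketch of that guard (illustrative only; the real check lives in minikube's mustload package):

	package main

	import (
		"fmt"
		"os"
	)

	const profile = "default-k8s-diff-port-258000"

	// hostState stands in for the driver status query; the post-mortem
	// above shows it returning "Stopped" for this profile.
	func hostState() string { return "Stopped" }

	func main() {
		if state := hostState(); state != "Running" {
			fmt.Printf("* The control-plane node %s host is not running: state=%s\n", profile, state)
			fmt.Printf("  To start a cluster, run: \"minikube start -p %s\"\n", profile)
			os.Exit(83) // the "exit status 83" recorded above
		}
		// pause logic would only run from here, against a live cluster
	}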

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-194000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-194000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.18297025s)

-- stdout --
	* [newest-cni-194000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-194000" primary control-plane node in "newest-cni-194000" cluster
	* Restarting existing qemu2 VM for "newest-cni-194000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-194000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 11:21:16.351125    7193 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:21:16.351244    7193 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:21:16.351247    7193 out.go:358] Setting ErrFile to fd 2...
	I0910 11:21:16.351250    7193 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:21:16.351374    7193 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:21:16.352370    7193 out.go:352] Setting JSON to false
	I0910 11:21:16.368438    7193 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4840,"bootTime":1725987636,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 11:21:16.368614    7193 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 11:21:16.373548    7193 out.go:177] * [newest-cni-194000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 11:21:16.379567    7193 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 11:21:16.379617    7193 notify.go:220] Checking for updates...
	I0910 11:21:16.386557    7193 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 11:21:16.389416    7193 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 11:21:16.392512    7193 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 11:21:16.395548    7193 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 11:21:16.397007    7193 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 11:21:16.400876    7193 config.go:182] Loaded profile config "newest-cni-194000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:21:16.401145    7193 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 11:21:16.405495    7193 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 11:21:16.410518    7193 start.go:297] selected driver: qemu2
	I0910 11:21:16.410527    7193 start.go:901] validating driver "qemu2" against &{Name:newest-cni-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:21:16.410593    7193 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 11:21:16.412800    7193 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0910 11:21:16.412827    7193 cni.go:84] Creating CNI manager for ""
	I0910 11:21:16.412836    7193 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 11:21:16.412882    7193 start.go:340] cluster config:
	{Name:newest-cni-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 11:21:16.416299    7193 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 11:21:16.423562    7193 out.go:177] * Starting "newest-cni-194000" primary control-plane node in "newest-cni-194000" cluster
	I0910 11:21:16.427577    7193 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 11:21:16.427591    7193 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 11:21:16.427600    7193 cache.go:56] Caching tarball of preloaded images
	I0910 11:21:16.427658    7193 preload.go:172] Found /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 11:21:16.427663    7193 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 11:21:16.427715    7193 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/newest-cni-194000/config.json ...
	I0910 11:21:16.428272    7193 start.go:360] acquireMachinesLock for newest-cni-194000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:21:16.428305    7193 start.go:364] duration metric: took 27.209µs to acquireMachinesLock for "newest-cni-194000"
	I0910 11:21:16.428313    7193 start.go:96] Skipping create...Using existing machine configuration
	I0910 11:21:16.428318    7193 fix.go:54] fixHost starting: 
	I0910 11:21:16.428432    7193 fix.go:112] recreateIfNeeded on newest-cni-194000: state=Stopped err=<nil>
	W0910 11:21:16.428440    7193 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 11:21:16.432437    7193 out.go:177] * Restarting existing qemu2 VM for "newest-cni-194000" ...
	I0910 11:21:16.440349    7193 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:21:16.440378    7193 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:75:d8:a2:61:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/disk.qcow2
	I0910 11:21:16.442287    7193 main.go:141] libmachine: STDOUT: 
	I0910 11:21:16.442310    7193 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:21:16.442339    7193 fix.go:56] duration metric: took 14.021208ms for fixHost
	I0910 11:21:16.442343    7193 start.go:83] releasing machines lock for "newest-cni-194000", held for 14.034375ms
	W0910 11:21:16.442352    7193 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:21:16.442384    7193 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:21:16.442388    7193 start.go:729] Will try again in 5 seconds ...
	I0910 11:21:21.444450    7193 start.go:360] acquireMachinesLock for newest-cni-194000: {Name:mk7d82825f9df5a42d0956d3943a8155bac34abc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 11:21:21.444872    7193 start.go:364] duration metric: took 297.375µs to acquireMachinesLock for "newest-cni-194000"
	I0910 11:21:21.444966    7193 start.go:96] Skipping create...Using existing machine configuration
	I0910 11:21:21.444984    7193 fix.go:54] fixHost starting: 
	I0910 11:21:21.445725    7193 fix.go:112] recreateIfNeeded on newest-cni-194000: state=Stopped err=<nil>
	W0910 11:21:21.445750    7193 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 11:21:21.457411    7193 out.go:177] * Restarting existing qemu2 VM for "newest-cni-194000" ...
	I0910 11:21:21.461175    7193 qemu.go:418] Using hvf for hardware acceleration
	I0910 11:21:21.461386    7193 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:75:d8:a2:61:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19598-1276/.minikube/machines/newest-cni-194000/disk.qcow2
	I0910 11:21:21.470280    7193 main.go:141] libmachine: STDOUT: 
	I0910 11:21:21.470362    7193 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 11:21:21.470519    7193 fix.go:56] duration metric: took 25.530166ms for fixHost
	I0910 11:21:21.470539    7193 start.go:83] releasing machines lock for "newest-cni-194000", held for 25.646125ms
	W0910 11:21:21.470775    7193 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-194000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-194000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 11:21:21.478021    7193 out.go:201] 
	W0910 11:21:21.482260    7193 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 11:21:21.482289    7193 out.go:270] * 
	* 
	W0910 11:21:21.484870    7193 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 11:21:21.492189    7193 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-194000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-194000 -n newest-cni-194000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-194000 -n newest-cni-194000: exit status 7 (68.560792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-194000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
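Every qemu2 start in this log dies the same way: the driver execs qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon's unix socket at /var/run/socket_vmnet ("Connection refused"). A first triage pass on the CI host might look like the sketch below; the paths come straight from the log, while the launchd lookup assumes a standard socket_vmnet install and is only a guess for this agent.

    # Is the daemon's unix socket present at the path the driver dials?
    ls -l /var/run/socket_vmnet
    # Is any socket_vmnet daemon process alive?
    pgrep -fl socket_vmnet
    # If it runs under launchd (assumption), check whether the job is loaded:
    sudo launchctl list | grep -i socket_vmnet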

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-194000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-194000 -n newest-cni-194000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-194000 -n newest-cni-194000: exit status 7 (29.874875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-194000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
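The want/got diff above is go-cmp output: each "-" line is an image the test expected "minikube image list" to return for v1.31.0, and nothing came back because the VM never started. As a hypothetical cross-check (kubeadm is not part of this harness), the expected set can be regenerated for any release:

    # Print the control-plane image set for a given Kubernetes version;
    # the test's expected list is this plus gcr.io/k8s-minikube/storage-provisioner.
    kubeadm config images list --kubernetes-version v1.31.0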

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-194000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-194000 --alsologtostderr -v=1: exit status 83 (42.8065ms)

-- stdout --
	* The control-plane node newest-cni-194000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-194000"

-- /stdout --
** stderr ** 
	I0910 11:21:21.675101    7207 out.go:345] Setting OutFile to fd 1 ...
	I0910 11:21:21.675259    7207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:21:21.675262    7207 out.go:358] Setting ErrFile to fd 2...
	I0910 11:21:21.675264    7207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 11:21:21.675408    7207 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 11:21:21.675636    7207 out.go:352] Setting JSON to false
	I0910 11:21:21.675643    7207 mustload.go:65] Loading cluster: newest-cni-194000
	I0910 11:21:21.675847    7207 config.go:182] Loaded profile config "newest-cni-194000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 11:21:21.680288    7207 out.go:177] * The control-plane node newest-cni-194000 host is not running: state=Stopped
	I0910 11:21:21.684341    7207 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-194000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-194000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-194000 -n newest-cni-194000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-194000 -n newest-cni-194000: exit status 7 (29.825292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-194000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-194000 -n newest-cni-194000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-194000 -n newest-cni-194000: exit status 7 (30.611375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-194000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
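pause refuses to act on a stopped host and exits 83; minikube groups its reserved exit codes by subsystem, and the 80s cover guest-state errors (compare the GUEST_PROVISION failure above, which exited 80). A wrapper that wants to tolerate a stopped profile could gate on host state first; a minimal sketch using only commands already seen in this log:

    # Only attempt pause when the control-plane host reports Running.
    host=$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p newest-cni-194000)
    if [ "$host" = "Running" ]; then
      out/minikube-darwin-arm64 pause -p newest-cni-194000 --alsologtostderr -v=1
    fi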


Test pass (155/274)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.0/json-events 8.51
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.08
18 TestDownloadOnly/v1.31.0/DeleteAll 0.11
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.31
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 201.86
29 TestAddons/serial/Volcano 37.18
31 TestAddons/serial/GCPAuth/Namespaces 0.1
34 TestAddons/parallel/Ingress 19.5
35 TestAddons/parallel/InspektorGadget 10.39
36 TestAddons/parallel/MetricsServer 5.29
39 TestAddons/parallel/CSI 64.43
40 TestAddons/parallel/Headlamp 14.61
41 TestAddons/parallel/CloudSpanner 5.19
42 TestAddons/parallel/LocalPath 42.98
43 TestAddons/parallel/NvidiaDevicePlugin 6.14
44 TestAddons/parallel/Yakd 10.3
45 TestAddons/StoppedEnableDisable 12.4
53 TestHyperKitDriverInstallOrUpdate 10.86
56 TestErrorSpam/setup 33.68
57 TestErrorSpam/start 0.36
58 TestErrorSpam/status 0.24
59 TestErrorSpam/pause 0.7
60 TestErrorSpam/unpause 0.57
61 TestErrorSpam/stop 55.26
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 72.3
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 38.96
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.05
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.83
73 TestFunctional/serial/CacheCmd/cache/add_local 1.64
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.08
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 0.84
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.03
81 TestFunctional/serial/ExtraConfig 36.17
82 TestFunctional/serial/ComponentHealth 0.04
83 TestFunctional/serial/LogsCmd 0.63
84 TestFunctional/serial/LogsFileCmd 0.6
85 TestFunctional/serial/InvalidService 4.17
87 TestFunctional/parallel/ConfigCmd 0.23
88 TestFunctional/parallel/DashboardCmd 10.3
89 TestFunctional/parallel/DryRun 0.23
90 TestFunctional/parallel/InternationalLanguage 0.12
91 TestFunctional/parallel/StatusCmd 0.24
96 TestFunctional/parallel/AddonsCmd 0.11
97 TestFunctional/parallel/PersistentVolumeClaim 23.95
99 TestFunctional/parallel/SSHCmd 0.12
100 TestFunctional/parallel/CpCmd 0.41
102 TestFunctional/parallel/FileSync 0.07
103 TestFunctional/parallel/CertSync 0.4
107 TestFunctional/parallel/NodeLabels 0.04
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
111 TestFunctional/parallel/License 0.32
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.21
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.11
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.06
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
121 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
124 TestFunctional/parallel/ServiceCmd/List 0.32
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.3
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
127 TestFunctional/parallel/ServiceCmd/Format 0.1
128 TestFunctional/parallel/ServiceCmd/URL 0.09
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.12
130 TestFunctional/parallel/ProfileCmd/profile_list 0.12
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
132 TestFunctional/parallel/MountCmd/any-port 6.35
133 TestFunctional/parallel/MountCmd/specific-port 1.22
134 TestFunctional/parallel/MountCmd/VerifyCleanup 0.88
135 TestFunctional/parallel/Version/short 0.04
136 TestFunctional/parallel/Version/components 0.15
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
141 TestFunctional/parallel/ImageCommands/ImageBuild 2.73
142 TestFunctional/parallel/ImageCommands/Setup 1.74
143 TestFunctional/parallel/DockerEnv/bash 0.28
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.46
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.36
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.18
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.14
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.2
154 TestFunctional/delete_echo-server_images 0.03
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 203.39
161 TestMultiControlPlane/serial/DeployApp 5.94
162 TestMultiControlPlane/serial/PingHostFromPods 0.73
163 TestMultiControlPlane/serial/AddWorkerNode 61.14
164 TestMultiControlPlane/serial/NodeLabels 0.12
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.25
166 TestMultiControlPlane/serial/CopyFile 4.38
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 27.73
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 3.38
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.2
212 TestMainNoArgs 0.03
259 TestStoppedBinaryUpgrade/Setup 0.89
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
276 TestNoKubernetes/serial/ProfileList 31.27
277 TestNoKubernetes/serial/Stop 3.5
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
288 TestStoppedBinaryUpgrade/MinikubeLogs 0.68
294 TestStartStop/group/old-k8s-version/serial/Stop 3.39
297 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.1
307 TestStartStop/group/no-preload/serial/Stop 3.55
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
316 TestStartStop/group/embed-certs/serial/Stop 3.93
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.14
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.39
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
332 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
336 TestStartStop/group/newest-cni/serial/Stop 3.43
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-581000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-581000: exit status 85 (93.477167ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-581000 | jenkins | v1.34.0 | 10 Sep 24 10:28 PDT |          |
	|         | -p download-only-581000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 10:28:20
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 10:28:20.652600    1797 out.go:345] Setting OutFile to fd 1 ...
	I0910 10:28:20.652745    1797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 10:28:20.652748    1797 out.go:358] Setting ErrFile to fd 2...
	I0910 10:28:20.652750    1797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 10:28:20.652880    1797 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	W0910 10:28:20.652971    1797 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19598-1276/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19598-1276/.minikube/config/config.json: no such file or directory
	I0910 10:28:20.654255    1797 out.go:352] Setting JSON to true
	I0910 10:28:20.671245    1797 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1664,"bootTime":1725987636,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 10:28:20.671321    1797 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 10:28:20.677438    1797 out.go:97] [download-only-581000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 10:28:20.677569    1797 notify.go:220] Checking for updates...
	W0910 10:28:20.677578    1797 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball: no such file or directory
	I0910 10:28:20.681277    1797 out.go:169] MINIKUBE_LOCATION=19598
	I0910 10:28:20.684303    1797 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 10:28:20.689368    1797 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 10:28:20.692302    1797 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 10:28:20.695325    1797 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	W0910 10:28:20.701323    1797 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0910 10:28:20.701516    1797 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 10:28:20.706354    1797 out.go:97] Using the qemu2 driver based on user configuration
	I0910 10:28:20.706377    1797 start.go:297] selected driver: qemu2
	I0910 10:28:20.706381    1797 start.go:901] validating driver "qemu2" against <nil>
	I0910 10:28:20.706461    1797 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 10:28:20.709376    1797 out.go:169] Automatically selected the socket_vmnet network
	I0910 10:28:20.715028    1797 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0910 10:28:20.715118    1797 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0910 10:28:20.715210    1797 cni.go:84] Creating CNI manager for ""
	I0910 10:28:20.715226    1797 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0910 10:28:20.715273    1797 start.go:340] cluster config:
	{Name:download-only-581000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 10:28:20.720452    1797 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 10:28:20.725309    1797 out.go:97] Downloading VM boot image ...
	I0910 10:28:20.725322    1797 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/iso/arm64/minikube-v1.34.0-1725912912-19598-arm64.iso
	I0910 10:28:28.936338    1797 out.go:97] Starting "download-only-581000" primary control-plane node in "download-only-581000" cluster
	I0910 10:28:28.936358    1797 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0910 10:28:28.996348    1797 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0910 10:28:28.996356    1797 cache.go:56] Caching tarball of preloaded images
	I0910 10:28:28.996507    1797 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0910 10:28:29.001634    1797 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0910 10:28:29.001641    1797 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0910 10:28:29.077377    1797 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0910 10:28:34.604768    1797 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0910 10:28:34.604933    1797 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0910 10:28:35.300660    1797 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0910 10:28:35.300865    1797 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/download-only-581000/config.json ...
	I0910 10:28:35.300882    1797 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/download-only-581000/config.json: {Name:mk0d9555d9ba472361af6b5a19e01c658b692478 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 10:28:35.301105    1797 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0910 10:28:35.301311    1797 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0910 10:28:35.885001    1797 out.go:193] 
	W0910 10:28:35.890735    1797 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19598-1276/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108d9c020 0x108d9c020 0x108d9c020 0x108d9c020 0x108d9c020 0x108d9c020 0x108d9c020] Decompressors:map[bz2:0x140007fb700 gz:0x140007fb708 tar:0x140007fb6b0 tar.bz2:0x140007fb6c0 tar.gz:0x140007fb6d0 tar.xz:0x140007fb6e0 tar.zst:0x140007fb6f0 tbz2:0x140007fb6c0 tgz:0x140007fb6d0 txz:0x140007fb6e0 tzst:0x140007fb6f0 xz:0x140007fb710 zip:0x140007fb720 zst:0x140007fb718] Getters:map[file:0x14001404650 http:0x140004f8230 https:0x140004f8280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0910 10:28:35.890768    1797 out_reason.go:110] 
	W0910 10:28:35.901860    1797 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 10:28:35.906807    1797 out.go:193] 
	
	
	* The control-plane node download-only-581000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-581000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
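This LogsDuration dump doubles as the root cause of the v1.20.0 json-events failure: go-getter's checksum=file:<url> form downloads the checksum file before the artifact, and dl.k8s.io answers 404 for the darwin/arm64 kubectl checksum at v1.20.0, a release old enough that the absence of darwin/arm64 binaries is unsurprising. The failure reproduces outside the harness:

    # Print the final HTTP status for the checksum URL go-getter tried;
    # 404 here matches the "bad response code: 404" in the log above.
    curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256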

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-581000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0/json-events (8.51s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-266000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-266000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 : (8.5099215s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (8.51s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-266000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-266000: exit status 85 (78.366ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-581000 | jenkins | v1.34.0 | 10 Sep 24 10:28 PDT |                     |
	|         | -p download-only-581000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 10 Sep 24 10:28 PDT | 10 Sep 24 10:28 PDT |
	| delete  | -p download-only-581000        | download-only-581000 | jenkins | v1.34.0 | 10 Sep 24 10:28 PDT | 10 Sep 24 10:28 PDT |
	| start   | -o=json --download-only        | download-only-266000 | jenkins | v1.34.0 | 10 Sep 24 10:28 PDT |                     |
	|         | -p download-only-266000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 10:28:36
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 10:28:36.322391    1822 out.go:345] Setting OutFile to fd 1 ...
	I0910 10:28:36.322508    1822 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 10:28:36.322511    1822 out.go:358] Setting ErrFile to fd 2...
	I0910 10:28:36.322514    1822 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 10:28:36.322629    1822 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 10:28:36.323670    1822 out.go:352] Setting JSON to true
	I0910 10:28:36.340166    1822 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1680,"bootTime":1725987636,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 10:28:36.340241    1822 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 10:28:36.344371    1822 out.go:97] [download-only-266000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 10:28:36.344458    1822 notify.go:220] Checking for updates...
	I0910 10:28:36.348326    1822 out.go:169] MINIKUBE_LOCATION=19598
	I0910 10:28:36.351404    1822 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 10:28:36.355344    1822 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 10:28:36.358332    1822 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 10:28:36.361374    1822 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	W0910 10:28:36.367260    1822 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0910 10:28:36.367411    1822 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 10:28:36.370319    1822 out.go:97] Using the qemu2 driver based on user configuration
	I0910 10:28:36.370327    1822 start.go:297] selected driver: qemu2
	I0910 10:28:36.370331    1822 start.go:901] validating driver "qemu2" against <nil>
	I0910 10:28:36.370376    1822 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 10:28:36.373305    1822 out.go:169] Automatically selected the socket_vmnet network
	I0910 10:28:36.378527    1822 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0910 10:28:36.378614    1822 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0910 10:28:36.378634    1822 cni.go:84] Creating CNI manager for ""
	I0910 10:28:36.378640    1822 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 10:28:36.378645    1822 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 10:28:36.378690    1822 start.go:340] cluster config:
	{Name:download-only-266000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-266000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 10:28:36.381999    1822 iso.go:125] acquiring lock: {Name:mkdf05e9eafdd4f958013b37202d71f89648c5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 10:28:36.385357    1822 out.go:97] Starting "download-only-266000" primary control-plane node in "download-only-266000" cluster
	I0910 10:28:36.385367    1822 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 10:28:36.441515    1822 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 10:28:36.441527    1822 cache.go:56] Caching tarball of preloaded images
	I0910 10:28:36.441670    1822 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 10:28:36.445791    1822 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0910 10:28:36.445798    1822 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0910 10:28:36.519255    1822 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-266000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-266000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.08s)
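Contrast with the v1.20.0 run: the v1.31.0 preload tarball exists on GCS, downloads, and is verified against the checksum=md5: query in its URL. The same verification can be repeated by hand on the cached file (macOS md5; path and digest copied from the log):

    md5 -q /Users/jenkins/minikube-integration/19598-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
    # expected: 90c22abece392b762c0b4e45be981bb4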

TestDownloadOnly/v1.31.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-266000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.31s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-025000 --alsologtostderr --binary-mirror http://127.0.0.1:49313 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-025000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-025000
--- PASS: TestBinaryMirror (0.31s)
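TestBinaryMirror points minikube's kubectl/kubelet/kubeadm downloads at a throwaway local HTTP server (port 49313 here, picked by the harness). A hypothetical stand-in outside the harness is any static file server whose layout mirrors dl.k8s.io's /release/<version>/bin/<os>/<arch>/ paths:

    # Serve a mirror directory and point a download-only start at it
    # (directory layout is the caller's responsibility; sketch only).
    python3 -m http.server 49313 &
    out/minikube-darwin-arm64 start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:49313 --driver=qemu2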

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster


=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-592000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-592000: exit status 85 (62.057125ms)

-- stdout --
	* Profile "addons-592000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-592000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster


=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-592000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-592000: exit status 85 (58.18225ms)

-- stdout --
	* Profile "addons-592000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-592000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (201.86s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-592000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-592000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m21.864842458s)
--- PASS: TestAddons/Setup (201.86s)

TestAddons/serial/Volcano (37.18s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 7.802125ms
addons_test.go:905: volcano-admission stabilized in 8.011625ms
addons_test.go:913: volcano-controller stabilized in 8.132875ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-h9k69" [214831e3-6d49-44cf-90da-53682e33dd76] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004571875s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-nvbbw" [902822f6-d6aa-4e05-afb0-07f9ca7a3530] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004133791s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-44cc4" [5e8ce63e-cbe4-43ac-97a4-cff01dbe69c2] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.008910041s
addons_test.go:932: (dbg) Run:  kubectl --context addons-592000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-592000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-592000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [e8550b1d-68ba-4b67-baea-b96867b747bf] Pending
helpers_test.go:344: "test-job-nginx-0" [e8550b1d-68ba-4b67-baea-b96867b747bf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [e8550b1d-68ba-4b67-baea-b96867b747bf] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.004916458s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-592000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-592000 addons disable volcano --alsologtostderr -v=1: (9.918076125s)
--- PASS: TestAddons/serial/Volcano (37.18s)
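
Note: the Volcano exercise above can be replayed by hand with the same commands the test runs; a minimal sketch, assuming the volcano addon is enabled on the addons-592000 profile and testdata/vcjob.yaml is available:

	# clear the one-shot admission-init job, then submit and inspect the sample vcjob
	kubectl --context addons-592000 delete -n volcano-system job volcano-admission-init
	kubectl --context addons-592000 create -f testdata/vcjob.yaml
	kubectl --context addons-592000 get vcjob -n my-volcano
	# watch the job pod (label taken from the test's pod matcher) until test-job-nginx-0 is Running
	kubectl --context addons-592000 get pods -n my-volcano -l volcano.sh/job-name=test-job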

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-592000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-592000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/parallel/Ingress (19.5s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-592000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-592000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-592000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3736ce08-4657-41a8-832f-2ed5bb054da7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3736ce08-4657-41a8-832f-2ed5bb054da7] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.011913375s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-592000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-592000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-592000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-592000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-592000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-592000 addons disable ingress --alsologtostderr -v=1: (7.229665667s)
--- PASS: TestAddons/parallel/Ingress (19.50s)
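
Note: the ingress verification above is reproducible by hand; a sketch of the same steps, assuming a running addons-592000 profile with the ingress and ingress-dns addons enabled and the test's manifests on hand (the command substitution for the node IP is an assumption standing in for the literal 192.168.105.2 the test resolved):

	# wait for the nginx ingress controller, then deploy the sample ingress and backing pod/service
	kubectl --context addons-592000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
	kubectl --context addons-592000 replace --force -f testdata/nginx-ingress-v1.yaml
	kubectl --context addons-592000 replace --force -f testdata/nginx-pod-svc.yaml
	# curl through the ingress from inside the VM, then resolve the ingress-dns record against the node IP
	out/minikube-darwin-arm64 -p addons-592000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test "$(out/minikube-darwin-arm64 -p addons-592000 ip)"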

TestAddons/parallel/InspektorGadget (10.39s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-vbxln" [67c310ce-8e3e-4ac8-b287-abcb1c652c82] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003995375s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-592000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-592000: (5.388883917s)
--- PASS: TestAddons/parallel/InspektorGadget (10.39s)

TestAddons/parallel/MetricsServer (5.29s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.297709ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-sb6ns" [6ef6de4d-79f9-4779-971b-4671e55ffe5a] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.010775833s
addons_test.go:417: (dbg) Run:  kubectl --context addons-592000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-592000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.29s)

TestAddons/parallel/CSI (64.43s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 2.51175ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-592000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-592000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4d53eeda-352e-43cb-aac5-854eca43565e] Pending
helpers_test.go:344: "task-pv-pod" [4d53eeda-352e-43cb-aac5-854eca43565e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4d53eeda-352e-43cb-aac5-854eca43565e] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.0072725s
addons_test.go:590: (dbg) Run:  kubectl --context addons-592000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-592000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-592000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-592000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-592000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-592000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-592000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [462fbb58-f0b5-4b12-a3b4-b81c70d7eb95] Pending
helpers_test.go:344: "task-pv-pod-restore" [462fbb58-f0b5-4b12-a3b4-b81c70d7eb95] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [462fbb58-f0b5-4b12-a3b4-b81c70d7eb95] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.009809125s
addons_test.go:632: (dbg) Run:  kubectl --context addons-592000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-592000 delete pod task-pv-pod-restore: (1.249157125s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-592000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-592000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-592000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-592000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.149218333s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-592000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (64.43s)
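
Note: the long runs of `kubectl get pvc` lines above are the test helper (helpers_test.go:394) polling the claim's phase; an equivalent hand-rolled wait, sketched under the assumption that the helper is waiting for the claim to reach Bound:

	# poll the hpvc claim until the provisioner binds it
	while [ "$(kubectl --context addons-592000 get pvc hpvc -o jsonpath={.status.phase} -n default)" != "Bound" ]; do
	  sleep 2
	done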

TestAddons/parallel/Headlamp (14.61s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-592000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-lfnfg" [bb87d5d5-0e04-429f-b0fe-c45dc826abf7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-lfnfg" [bb87d5d5-0e04-429f-b0fe-c45dc826abf7] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.005783625s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-592000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-592000 addons disable headlamp --alsologtostderr -v=1: (5.264565541s)
--- PASS: TestAddons/parallel/Headlamp (14.61s)

TestAddons/parallel/CloudSpanner (5.19s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-85njj" [53446092-540b-452e-a4ba-9d39abefaa3d] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0079195s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-592000
--- PASS: TestAddons/parallel/CloudSpanner (5.19s)

TestAddons/parallel/LocalPath (42.98s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-592000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-592000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-592000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [bde96b96-b4be-474f-9e55-c57d204874e0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [bde96b96-b4be-474f-9e55-c57d204874e0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [bde96b96-b4be-474f-9e55-c57d204874e0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.009788583s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-592000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-592000 ssh "cat /opt/local-path-provisioner/pvc-7f2dc314-03ea-444c-b24c-a633ac8fa12e_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-592000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-592000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-592000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-592000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.446598208s)
--- PASS: TestAddons/parallel/LocalPath (42.98s)
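
Note: local-path provisioning can be verified by hand with the same two steps the test uses: read the claim to learn the bound volume name, then cat the file from inside the VM. The pvc-... directory name is per-run; the UUID below is the one from this run's log:

	# provisioned path layout: /opt/local-path-provisioner/<volumeName>_<namespace>_<claimName>
	kubectl --context addons-592000 get pvc test-pvc -o=json
	out/minikube-darwin-arm64 -p addons-592000 ssh "cat /opt/local-path-provisioner/pvc-7f2dc314-03ea-444c-b24c-a633ac8fa12e_default_test-pvc/file1"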

TestAddons/parallel/NvidiaDevicePlugin (6.14s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-pzndx" [0b9dc429-c9ee-40ac-82cb-a97095b45450] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003532209s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-592000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.14s)

TestAddons/parallel/Yakd (10.3s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-ln7xx" [03ecabc3-e675-495a-95c4-1915bed6ab32] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.010425s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-592000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-592000 addons disable yakd --alsologtostderr -v=1: (5.288175958s)
--- PASS: TestAddons/parallel/Yakd (10.30s)

TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-592000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-592000: (12.206303625s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-592000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-592000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-592000
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

TestHyperKitDriverInstallOrUpdate (10.86s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.86s)

TestErrorSpam/setup (33.68s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-522000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-522000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-522000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-522000 --driver=qemu2 : (33.6749225s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0."
--- PASS: TestErrorSpam/setup (33.68s)

TestErrorSpam/start (0.36s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-522000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-522000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-522000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-522000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-522000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-522000 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-522000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-522000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-522000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-522000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-522000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-522000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.7s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-522000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-522000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-522000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-522000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-522000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-522000 pause
--- PASS: TestErrorSpam/pause (0.70s)

TestErrorSpam/unpause (0.57s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-522000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-522000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-522000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-522000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-522000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-522000 unpause
--- PASS: TestErrorSpam/unpause (0.57s)

TestErrorSpam/stop (55.26s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-522000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-522000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-522000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-522000 stop: (3.197183333s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-522000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-522000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-522000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-522000 stop: (26.032067208s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-522000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-522000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-522000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-522000 stop: (26.02900875s)
--- PASS: TestErrorSpam/stop (55.26s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19598-1276/.minikube/files/etc/test/nested/copy/1795/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (72.3s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-475000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-475000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (1m12.299074709s)
--- PASS: TestFunctional/serial/StartWithProxy (72.30s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.96s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-475000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-475000 --alsologtostderr -v=8: (38.955087625s)
functional_test.go:663: soft start took 38.955664s for "functional-475000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.96s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-475000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-475000 cache add registry.k8s.io/pause:3.1: (1.791755917s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-475000 cache add registry.k8s.io/pause:3.3: (1.763291083s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-475000 cache add registry.k8s.io/pause:latest: (1.275704416s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.83s)

TestFunctional/serial/CacheCmd/cache/add_local (1.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-475000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local2154111649/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 cache add minikube-local-cache-test:functional-475000
functional_test.go:1089: (dbg) Done: out/minikube-darwin-arm64 -p functional-475000 cache add minikube-local-cache-test:functional-475000: (1.326814666s)
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 cache delete minikube-local-cache-test:functional-475000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-475000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.64s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-475000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (66.198166ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.08s)
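
Note: the cache_reload sequence above amounts to four commands and is easy to replay against the functional-475000 profile; a sketch using the exact commands from the log:

	# remove the image inside the node, confirm it is gone, reload the cache, confirm it is back
	out/minikube-darwin-arm64 -p functional-475000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-arm64 -p functional-475000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected: exit status 1, image absent
	out/minikube-darwin-arm64 -p functional-475000 cache reload
	out/minikube-darwin-arm64 -p functional-475000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after reload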

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.84s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 kubectl -- --context functional-475000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.84s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.03s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-475000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-475000 get pods: (1.03341625s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.03s)

TestFunctional/serial/ExtraConfig (36.17s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-475000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0910 10:47:07.526475    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:47:07.536456    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:47:07.549943    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:47:07.573431    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:47:07.615724    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:47:07.697441    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:47:07.861016    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:47:08.184527    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:47:08.828392    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:47:10.112156    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:47:12.675687    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-475000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.173074375s)
functional_test.go:761: restart took 36.173171333s for "functional-475000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.17s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-475000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.63s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.63s)

TestFunctional/serial/LogsFileCmd (0.6s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd1706688404/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.60s)

TestFunctional/serial/InvalidService (4.17s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-475000 apply -f testdata/invalidsvc.yaml
E0910 10:47:17.799260    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-475000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-475000: exit status 115 (123.081ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31540 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-475000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.17s)
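
Note: the exit status 115 above is the intended negative path; the service exists but no pod backs it. A minimal reproduction, assuming the same testdata/invalidsvc.yaml manifest:

	kubectl --context functional-475000 apply -f testdata/invalidsvc.yaml
	out/minikube-darwin-arm64 service invalid-svc -p functional-475000   # expected: SVC_UNREACHABLE, exit status 115
	kubectl --context functional-475000 delete -f testdata/invalidsvc.yaml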

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-475000 config get cpus: exit status 14 (32.38875ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-475000 config get cpus: exit status 14 (30.702084ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)

TestFunctional/parallel/DashboardCmd (10.3s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-475000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-475000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2961: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.30s)

TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-475000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-475000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (114.729458ms)
-- stdout --
	* [functional-475000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0910 10:48:00.817012    2948 out.go:345] Setting OutFile to fd 1 ...
	I0910 10:48:00.817130    2948 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 10:48:00.817133    2948 out.go:358] Setting ErrFile to fd 2...
	I0910 10:48:00.817135    2948 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 10:48:00.817273    2948 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 10:48:00.818368    2948 out.go:352] Setting JSON to false
	I0910 10:48:00.835242    2948 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2844,"bootTime":1725987636,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 10:48:00.835334    2948 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 10:48:00.840327    2948 out.go:177] * [functional-475000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0910 10:48:00.847254    2948 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 10:48:00.847315    2948 notify.go:220] Checking for updates...
	I0910 10:48:00.854340    2948 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 10:48:00.855753    2948 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 10:48:00.859268    2948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 10:48:00.862330    2948 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 10:48:00.865386    2948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 10:48:00.868676    2948 config.go:182] Loaded profile config "functional-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 10:48:00.868915    2948 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 10:48:00.873304    2948 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 10:48:00.880259    2948 start.go:297] selected driver: qemu2
	I0910 10:48:00.880265    2948 start.go:901] validating driver "qemu2" against &{Name:functional-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-475000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 10:48:00.880314    2948 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 10:48:00.886320    2948 out.go:201] 
	W0910 10:48:00.890301    2948 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0910 10:48:00.894299    2948 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-475000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)
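
Both dry-run probes in this group fail fast on the same resource gate: the requested 250MB is below minikube's usable memory floor, producing the RSRC_INSUFFICIENT_REQ_MEMORY error and exit status 23 seen above. A minimal Go sketch of that kind of check follows; it is illustrative only — the 1800MB constant, error text, and exit code are taken from the log, not from minikube's source.

package main

import (
	"fmt"
	"os"
)

// minUsableMemoryMB mirrors the "usable minimum of 1800MB" reported above.
const minUsableMemoryMB = 1800

// validateMemory is a stand-in for the start-time resource check; the real
// validation considers more than a single constant.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil { // 250MB, as passed by the test
		fmt.Fprintln(os.Stderr, "X Exiting due to", err)
		os.Exit(23) // the exit status recorded in the log
	}
}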

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-475000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-475000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (116.687042ms)

-- stdout --
	* [functional-475000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0910 10:48:00.696256    2944 out.go:345] Setting OutFile to fd 1 ...
	I0910 10:48:00.696356    2944 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 10:48:00.696359    2944 out.go:358] Setting ErrFile to fd 2...
	I0910 10:48:00.696361    2944 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 10:48:00.696488    2944 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
	I0910 10:48:00.697994    2944 out.go:352] Setting JSON to false
	I0910 10:48:00.716200    2944 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2844,"bootTime":1725987636,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0910 10:48:00.716307    2944 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0910 10:48:00.722349    2944 out.go:177] * [functional-475000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0910 10:48:00.731420    2944 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 10:48:00.731501    2944 notify.go:220] Checking for updates...
	I0910 10:48:00.739276    2944 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	I0910 10:48:00.742321    2944 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 10:48:00.745387    2944 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 10:48:00.748311    2944 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	I0910 10:48:00.751351    2944 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 10:48:00.754664    2944 config.go:182] Loaded profile config "functional-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 10:48:00.754945    2944 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 10:48:00.759233    2944 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0910 10:48:00.766343    2944 start.go:297] selected driver: qemu2
	I0910 10:48:00.766351    2944 start.go:901] validating driver "qemu2" against &{Name:functional-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-475000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 10:48:00.766412    2944 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 10:48:00.772312    2944 out.go:201] 
	W0910 10:48:00.776359    2944 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0910 10:48:00.779298    2944 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)
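
The -f flag exercised above takes a Go text/template that is applied to the status result (the template key "kublet" is reproduced verbatim from the test). A hedged sketch of how such a format string expands, using a stand-in Status type whose field names are inferred from the template keys rather than taken from minikube's actual types:

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for illustration; the real struct lives in minikube.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	tmpl := template.Must(template.New("status").Parse(format))
	// Prints: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
	tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
}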

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (23.95s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [857cb5fe-59ed-4f70-8d7d-093b99b3d324] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.01219575s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-475000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-475000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-475000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-475000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [42934c1d-68a1-4f22-ba91-2354ab847dfd] Pending
helpers_test.go:344: "sp-pod" [42934c1d-68a1-4f22-ba91-2354ab847dfd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0910 10:47:28.042470    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [42934c1d-68a1-4f22-ba91-2354ab847dfd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.005855917s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-475000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-475000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-475000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0563e9b1-bdef-4024-85fa-e832a07cf2b3] Pending
helpers_test.go:344: "sp-pod" [0563e9b1-bdef-4024-85fa-e832a07cf2b3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0563e9b1-bdef-4024-85fa-e832a07cf2b3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.011192291s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-475000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.95s)
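
The sequence above is the substance of the PVC test: a file written through the claim must survive the pod being deleted and recreated. A sketch of the same flow driven from Go via os/exec — the run helper is hypothetical, while the pod name, manifest paths, and mount point are the ones in the log:

package main

import (
	"fmt"
	"os/exec"
)

// run is a hypothetical helper; the real test uses its own (dbg) wrappers.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s(err=%v)\n", name, args, out, err)
}

func main() {
	ctx := "--context=functional-475000"
	// Write through the mounted claim, recycle the pod, then confirm the
	// file is still visible from the replacement pod.
	run("kubectl", ctx, "exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	run("kubectl", ctx, "delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("kubectl", ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (the real test waits for the new pod to be Running before this step)
	run("kubectl", ctx, "exec", "sp-pod", "--", "ls", "/tmp/mount")
}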

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.41s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh -n functional-475000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 cp functional-475000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd4154630811/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh -n functional-475000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh -n functional-475000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.41s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1795/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh "sudo cat /etc/test/nested/copy/1795/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.4s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1795.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh "sudo cat /etc/ssl/certs/1795.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1795.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh "sudo cat /usr/share/ca-certificates/1795.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/17952.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh "sudo cat /etc/ssl/certs/17952.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/17952.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh "sudo cat /usr/share/ca-certificates/17952.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.40s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-475000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)
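
The go-template above walks the label map of the first node in the kubectl JSON output. The same template can be exercised offline with text/template; the node data below is made up purely for illustration:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Stub of the kubectl JSON shape the template walks; a real node
	// carries many more labels.
	data := map[string]any{
		"items": []any{
			map[string]any{
				"metadata": map[string]any{
					"labels": map[string]string{
						"kubernetes.io/arch": "arm64",
						"kubernetes.io/os":   "linux",
					},
				},
			},
		},
	}
	const tpl = `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
	t := template.Must(template.New("labels").Parse(tpl))
	t.Execute(os.Stdout, data) // prints: kubernetes.io/arch kubernetes.io/os
}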

TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-475000 ssh "sudo systemctl is-active crio": exit status 1 (65.041125ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

TestFunctional/parallel/License (0.32s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-475000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-475000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-475000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2785: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-475000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.21s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-475000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-475000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [57a379a1-5852-4ccf-908c-8a0b1815f0eb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [57a379a1-5852-4ccf-908c-8a0b1815f0eb] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003185167s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.11s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-475000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.77.60 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.06s)
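
The dig invocation above queries the cluster DNS service (10.96.0.10) directly from the host, which only works while the tunnel is up. A rough Go equivalent using a custom resolver — the DNS IP and service name are the ones from this run, not generic values:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			// Send the query to the in-cluster DNS IP instead of the
			// system resolver, mirroring dig's @10.96.0.10 argument.
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ips, err := r.LookupHost(context.Background(), "nginx-svc.default.svc.cluster.local.")
	fmt.Println(ips, err)
}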

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-475000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-475000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-475000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-9prtd" [aba5f1eb-b400-4ffb-ba7c-42e959e1e05b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-9prtd" [aba5f1eb-b400-4ffb-ba7c-42e959e1e05b] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.009697458s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

TestFunctional/parallel/ServiceCmd/List (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 service list -o json
functional_test.go:1494: Took "298.0655ms" to run "out/minikube-darwin-arm64 -p functional-475000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.30s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:31089
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:31089
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.09s)
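
The endpoint that service --url resolves above is a plain NodePort on the VM's IP and can be probed directly. A small sketch; the URL below is the one from this run and will differ on any other cluster:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://192.168.105.4:31089") // endpoint from this run
	if err != nil {
		fmt.Println("endpoint not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s, %d bytes\n", resp.Status, len(body))
}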

TestFunctional/parallel/ProfileCmd/profile_not_create (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.12s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "89.758833ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "34.878333ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "81.1255ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "33.583416ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

TestFunctional/parallel/MountCmd/any-port (6.35s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-475000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1927916988/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1725990471988822000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1927916988/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1725990471988822000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1927916988/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1725990471988822000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1927916988/001/test-1725990471988822000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-475000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (55.784708ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 10 17:47 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 10 17:47 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 10 17:47 test-1725990471988822000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh cat /mount-9p/test-1725990471988822000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-475000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [32c7d11c-4deb-40e9-a7b8-e5b1abcd5cde] Pending
helpers_test.go:344: "busybox-mount" [32c7d11c-4deb-40e9-a7b8-e5b1abcd5cde] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [32c7d11c-4deb-40e9-a7b8-e5b1abcd5cde] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [32c7d11c-4deb-40e9-a7b8-e5b1abcd5cde] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004044333s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-475000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-475000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1927916988/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.35s)

TestFunctional/parallel/MountCmd/specific-port (1.22s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-475000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2407826175/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-475000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (59.553625ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-475000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2407826175/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-475000 ssh "sudo umount -f /mount-9p": exit status 1 (60.517333ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-475000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-475000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2407826175/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.22s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.88s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-475000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2300655594/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-475000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2300655594/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-475000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2300655594/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-475000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-475000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2300655594/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-475000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2300655594/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-475000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2300655594/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.88s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.15s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.15s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-475000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-475000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-475000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-475000 image ls --format short --alsologtostderr:
I0910 10:48:16.178946    3118 out.go:345] Setting OutFile to fd 1 ...
I0910 10:48:16.179092    3118 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 10:48:16.179095    3118 out.go:358] Setting ErrFile to fd 2...
I0910 10:48:16.179098    3118 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 10:48:16.179243    3118 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
I0910 10:48:16.179711    3118 config.go:182] Loaded profile config "functional-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 10:48:16.179777    3118 config.go:182] Loaded profile config "functional-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 10:48:16.180878    3118 ssh_runner.go:195] Run: systemctl --version
I0910 10:48:16.180885    3118 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/functional-475000/id_rsa Username:docker}
I0910 10:48:16.202936    3118 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)
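
The stderr trace above shows that image ls ultimately shells out to `docker images --no-trunc --format "{{json .}}"`, which emits one JSON object per line. A sketch of decoding that stream; the struct fields are the standard docker format keys, and the struct itself is an assumption, not minikube's type:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// image holds the docker-format keys of interest.
type image struct {
	ID         string `json:"ID"`
	Repository string `json:"Repository"`
	Tag        string `json:"Tag"`
	Size       string `json:"Size"`
}

func main() {
	// Pipe in: docker images --no-trunc --format "{{json .}}"
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var img image
		if err := json.Unmarshal(sc.Bytes(), &img); err != nil {
			continue // skip malformed lines
		}
		fmt.Printf("%s:%s  %s  %s\n", img.Repository, img.Tag, img.ID, img.Size)
	}
}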

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-475000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/library/minikube-local-cache-test | functional-475000 | 2859cf0956283 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.0           | cd0f0ae0ec9e0 | 91.5MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | fcb0683e6bdbd | 85.9MB |
| registry.k8s.io/kube-scheduler              | v1.31.0           | fbbbd428abb4d | 66MB   |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/kube-proxy                  | v1.31.0           | 71d55d66fd4ee | 94.7MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/kicbase/echo-server               | functional-475000 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-475000 image ls --format table --alsologtostderr:
I0910 10:48:16.325970    3127 out.go:345] Setting OutFile to fd 1 ...
I0910 10:48:16.326107    3127 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 10:48:16.326110    3127 out.go:358] Setting ErrFile to fd 2...
I0910 10:48:16.326113    3127 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 10:48:16.326227    3127 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
I0910 10:48:16.326617    3127 config.go:182] Loaded profile config "functional-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 10:48:16.326686    3127 config.go:182] Loaded profile config "functional-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 10:48:16.327453    3127 ssh_runner.go:195] Run: systemctl --version
I0910 10:48:16.327462    3127 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/functional-475000/id_rsa Username:docker}
I0910 10:48:16.351889    3127 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-475000 image ls --format json --alsologtostderr:
[{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"2859cf0956283b9193d503184a2d72ca0cab9bb154f32ddf5e3c74fa7ed04569","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-475000"],"size":"30"},{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"91500000"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"66000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"85900000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-475000"],"size":"4780000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"94700000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-475000 image ls --format json --alsologtostderr:
I0910 10:48:16.256026    3123 out.go:345] Setting OutFile to fd 1 ...
I0910 10:48:16.256184    3123 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 10:48:16.256188    3123 out.go:358] Setting ErrFile to fd 2...
I0910 10:48:16.256190    3123 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 10:48:16.256325    3123 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
I0910 10:48:16.256781    3123 config.go:182] Loaded profile config "functional-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 10:48:16.256843    3123 config.go:182] Loaded profile config "functional-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 10:48:16.257567    3123 ssh_runner.go:195] Run: systemctl --version
I0910 10:48:16.257581    3123 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/functional-475000/id_rsa Username:docker}
I0910 10:48:16.280054    3123 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-475000 image ls --format yaml --alsologtostderr:
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 2859cf0956283b9193d503184a2d72ca0cab9bb154f32ddf5e3c74fa7ed04569
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-475000
size: "30"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "94700000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "66000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "91500000"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "85900000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-475000
size: "4780000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-475000 image ls --format yaml --alsologtostderr:
I0910 10:48:16.178897    3117 out.go:345] Setting OutFile to fd 1 ...
I0910 10:48:16.179068    3117 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 10:48:16.179073    3117 out.go:358] Setting ErrFile to fd 2...
I0910 10:48:16.179075    3117 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 10:48:16.179222    3117 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
I0910 10:48:16.179650    3117 config.go:182] Loaded profile config "functional-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 10:48:16.179714    3117 config.go:182] Loaded profile config "functional-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 10:48:16.180509    3117 ssh_runner.go:195] Run: systemctl --version
I0910 10:48:16.180521    3117 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/functional-475000/id_rsa Username:docker}
I0910 10:48:16.203483    3117 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-475000 ssh pgrep buildkitd: exit status 1 (58.859666ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 image build -t localhost/my-image:functional-475000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-475000 image build -t localhost/my-image:functional-475000 testdata/build --alsologtostderr: (2.606461625s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-475000 image build -t localhost/my-image:functional-475000 testdata/build --alsologtostderr:
I0910 10:48:16.313351    3126 out.go:345] Setting OutFile to fd 1 ...
I0910 10:48:16.313634    3126 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 10:48:16.313638    3126 out.go:358] Setting ErrFile to fd 2...
I0910 10:48:16.313640    3126 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 10:48:16.313806    3126 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19598-1276/.minikube/bin
I0910 10:48:16.314252    3126 config.go:182] Loaded profile config "functional-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 10:48:16.314951    3126 config.go:182] Loaded profile config "functional-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 10:48:16.315809    3126 ssh_runner.go:195] Run: systemctl --version
I0910 10:48:16.315817    3126 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19598-1276/.minikube/machines/functional-475000/id_rsa Username:docker}
I0910 10:48:16.338256    3126 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2695704662.tar
I0910 10:48:16.338324    3126 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0910 10:48:16.342004    3126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2695704662.tar
I0910 10:48:16.343522    3126 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2695704662.tar: stat -c "%s %y" /var/lib/minikube/build/build.2695704662.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2695704662.tar': No such file or directory
I0910 10:48:16.343535    3126 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2695704662.tar --> /var/lib/minikube/build/build.2695704662.tar (3072 bytes)
I0910 10:48:16.352077    3126 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2695704662
I0910 10:48:16.355761    3126 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2695704662 -xf /var/lib/minikube/build/build.2695704662.tar
I0910 10:48:16.361717    3126 docker.go:360] Building image: /var/lib/minikube/build/build.2695704662
I0910 10:48:16.361763    3126 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-475000 /var/lib/minikube/build/build.2695704662
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.8s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.8s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:125762c913062eb65dfa50206f3e2d65c39237fbf7e2f45f16a4ccad8747b7dc done
#8 naming to localhost/my-image:functional-475000 done
#8 DONE 0.0s
I0910 10:48:18.871999    3126 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-475000 /var/lib/minikube/build/build.2695704662: (2.510292125s)
I0910 10:48:18.872098    3126 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2695704662
I0910 10:48:18.877745    3126 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2695704662.tar
I0910 10:48:18.881062    3126 build_images.go:217] Built localhost/my-image:functional-475000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2695704662.tar
I0910 10:48:18.881076    3126 build_images.go:133] succeeded building to: functional-475000
I0910 10:48:18.881079    3126 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.73s)
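
The BuildKit log above fully determines the three build steps ([1/3] FROM gcr.io/k8s-minikube/busybox:latest, [2/3] RUN true, [3/3] ADD content.txt /), so the testdata/build context is presumably equivalent to a Dockerfile along these lines (a reconstruction inferred from the log, not the verbatim file contents):

	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /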

TestFunctional/parallel/ImageCommands/Setup (1.74s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.718260709s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-475000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.74s)

TestFunctional/parallel/DockerEnv/bash (0.28s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-475000 docker-env) && out/minikube-darwin-arm64 status -p functional-475000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-475000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.28s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 image load --daemon kicbase/echo-server:functional-475000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.46s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 image load --daemon kicbase/echo-server:functional-475000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-475000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 image load --daemon kicbase/echo-server:functional-475000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 image save kicbase/echo-server:functional-475000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 image rm kicbase/echo-server:functional-475000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-475000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-475000 image save --daemon kicbase/echo-server:functional-475000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-475000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.20s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-475000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-475000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-475000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (203.39s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-080000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0910 10:48:29.488278    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:49:51.420933    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-080000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m23.195247209s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (203.39s)

TestMultiControlPlane/serial/DeployApp (5.94s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-080000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-080000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-080000 -- rollout status deployment/busybox: (4.509712125s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-080000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-080000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-080000 -- exec busybox-7dff88458-gpgqv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-080000 -- exec busybox-7dff88458-lx4l8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-080000 -- exec busybox-7dff88458-r69hv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-080000 -- exec busybox-7dff88458-gpgqv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-080000 -- exec busybox-7dff88458-lx4l8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-080000 -- exec busybox-7dff88458-r69hv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-080000 -- exec busybox-7dff88458-gpgqv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-080000 -- exec busybox-7dff88458-lx4l8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-080000 -- exec busybox-7dff88458-r69hv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.94s)

TestMultiControlPlane/serial/PingHostFromPods (0.73s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-080000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-080000 -- exec busybox-7dff88458-gpgqv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-080000 -- exec busybox-7dff88458-gpgqv -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-080000 -- exec busybox-7dff88458-lx4l8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-080000 -- exec busybox-7dff88458-lx4l8 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-080000 -- exec busybox-7dff88458-r69hv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-080000 -- exec busybox-7dff88458-r69hv -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.73s)
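
For context, the pipeline run above ("nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3") pulls the resolved host IP out of busybox-style nslookup output: line 5 is the answer's "Address 1:" line, and its third space-separated field is the address itself. Illustrative output only (not captured from this run; the in-cluster DNS server address is an assumption):

	Server:    10.96.0.10
	Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

	Name:      host.minikube.internal
	Address 1: 192.168.105.1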

TestMultiControlPlane/serial/AddWorkerNode (61.14s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-080000 -v=7 --alsologtostderr
E0910 10:52:07.528946    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:52:20.679593    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/functional-475000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:52:20.687213    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/functional-475000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:52:20.700239    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/functional-475000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:52:20.721850    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/functional-475000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:52:20.764160    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/functional-475000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:52:20.847324    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/functional-475000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:52:21.010688    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/functional-475000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:52:21.332661    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/functional-475000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:52:21.976141    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/functional-475000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:52:23.259676    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/functional-475000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:52:25.821502    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/functional-475000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:52:30.943270    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/functional-475000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:52:35.261027    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/addons-592000/client.crt: no such file or directory" logger="UnhandledError"
E0910 10:52:41.186535    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/functional-475000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-080000 -v=7 --alsologtostderr: (1m0.907448417s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.14s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-080000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.25s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.25s)

TestMultiControlPlane/serial/CopyFile (4.38s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 cp testdata/cp-test.txt ha-080000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 cp ha-080000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1413741474/001/cp-test_ha-080000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 cp ha-080000:/home/docker/cp-test.txt ha-080000-m02:/home/docker/cp-test_ha-080000_ha-080000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000-m02 "sudo cat /home/docker/cp-test_ha-080000_ha-080000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 cp ha-080000:/home/docker/cp-test.txt ha-080000-m03:/home/docker/cp-test_ha-080000_ha-080000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000-m03 "sudo cat /home/docker/cp-test_ha-080000_ha-080000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 cp ha-080000:/home/docker/cp-test.txt ha-080000-m04:/home/docker/cp-test_ha-080000_ha-080000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000-m04 "sudo cat /home/docker/cp-test_ha-080000_ha-080000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 cp testdata/cp-test.txt ha-080000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 cp ha-080000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1413741474/001/cp-test_ha-080000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 cp ha-080000-m02:/home/docker/cp-test.txt ha-080000:/home/docker/cp-test_ha-080000-m02_ha-080000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000 "sudo cat /home/docker/cp-test_ha-080000-m02_ha-080000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 cp ha-080000-m02:/home/docker/cp-test.txt ha-080000-m03:/home/docker/cp-test_ha-080000-m02_ha-080000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000-m03 "sudo cat /home/docker/cp-test_ha-080000-m02_ha-080000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 cp ha-080000-m02:/home/docker/cp-test.txt ha-080000-m04:/home/docker/cp-test_ha-080000-m02_ha-080000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000-m04 "sudo cat /home/docker/cp-test_ha-080000-m02_ha-080000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 cp testdata/cp-test.txt ha-080000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 cp ha-080000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1413741474/001/cp-test_ha-080000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 cp ha-080000-m03:/home/docker/cp-test.txt ha-080000:/home/docker/cp-test_ha-080000-m03_ha-080000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000 "sudo cat /home/docker/cp-test_ha-080000-m03_ha-080000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 cp ha-080000-m03:/home/docker/cp-test.txt ha-080000-m02:/home/docker/cp-test_ha-080000-m03_ha-080000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000-m02 "sudo cat /home/docker/cp-test_ha-080000-m03_ha-080000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 cp ha-080000-m03:/home/docker/cp-test.txt ha-080000-m04:/home/docker/cp-test_ha-080000-m03_ha-080000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000-m04 "sudo cat /home/docker/cp-test_ha-080000-m03_ha-080000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 cp testdata/cp-test.txt ha-080000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 cp ha-080000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1413741474/001/cp-test_ha-080000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 cp ha-080000-m04:/home/docker/cp-test.txt ha-080000:/home/docker/cp-test_ha-080000-m04_ha-080000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000 "sudo cat /home/docker/cp-test_ha-080000-m04_ha-080000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 cp ha-080000-m04:/home/docker/cp-test.txt ha-080000-m02:/home/docker/cp-test_ha-080000-m04_ha-080000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000-m02 "sudo cat /home/docker/cp-test_ha-080000-m04_ha-080000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 cp ha-080000-m04:/home/docker/cp-test.txt ha-080000-m03:/home/docker/cp-test_ha-080000-m04_ha-080000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-080000 ssh -n ha-080000-m03 "sudo cat /home/docker/cp-test_ha-080000-m04_ha-080000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.38s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (27.73s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0910 10:57:48.390128    1795 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19598-1276/.minikube/profiles/functional-475000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (27.731920208s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (27.73s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.38s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-966000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-966000 --output=json --user=testUser: (3.379609959s)
--- PASS: TestJSONOutput/stop/Command (3.38s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-924000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-924000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.794834ms)

-- stdout --
	{"specversion":"1.0","id":"3340d385-50d0-4ea9-9a80-def713154948","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-924000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"92e517dd-185b-4676-a4d4-7aeb43cb919a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19598"}}
	{"specversion":"1.0","id":"637c6a56-337b-48a5-b074-883f7f2468d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig"}}
	{"specversion":"1.0","id":"50275ec7-e174-469f-83e5-b42a5f3aa891","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"92e9eb24-eb3a-4c43-877f-c90040600a9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c61afb2e-a7d4-4120-9115-799311c2c58b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube"}}
	{"specversion":"1.0","id":"e80ac57b-83e4-4130-b8a8-cddcb699306d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e2976bfa-a05d-45b0-9dc2-ec43613b8c5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-924000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-924000
--- PASS: TestErrorJSONOutput (0.20s)
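
Since each stdout line above is a self-contained CloudEvents-style JSON object with a top-level "type" field, the stream can be post-processed with standard tools; for example, a sketch that filters out just the error event with jq (illustrative, not part of the test):

	out/minikube-darwin-arm64 start -p json-output-error-924000 --output=json --driver=fail \
	  | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'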

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (0.89s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.89s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-606000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-606000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (100.704584ms)

-- stdout --
	* [NoKubernetes-606000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19598-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19598-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-606000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-606000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (39.556791ms)

-- stdout --
	* The control-plane node NoKubernetes-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-606000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
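
The probe relies on systemctl semantics: systemctl is-active --quiet exits 0 only when the queried unit is active, so any non-zero status is read as "kubelet not running". Here the exit status 83 comes from minikube itself, because the guest is stopped and the SSH command never reaches systemd. A sketch of the same check against a running node, with the echo labels as illustrative additions:

	$ out/minikube-darwin-arm64 ssh -p NoKubernetes-606000 "sudo systemctl is-active --quiet kubelet" && echo active || echo inactive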

TestNoKubernetes/serial/ProfileList (31.27s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.568806208s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.696087s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.27s)

TestNoKubernetes/serial/Stop (3.5s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-606000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-606000: (3.504772208s)
--- PASS: TestNoKubernetes/serial/Stop (3.50s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-606000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-606000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (39.701917ms)

-- stdout --
	* The control-plane node NoKubernetes-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-606000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.68s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-163000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.68s)

TestStartStop/group/old-k8s-version/serial/Stop (3.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-497000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-497000 --alsologtostderr -v=3: (3.392514916s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-497000 -n old-k8s-version-497000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-497000 -n old-k8s-version-497000: exit status 7 (35.584333ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-497000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.10s)
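
This pattern repeats for each StartStop group below: minikube status exits with code 7 when the host is stopped (hence the "may be ok" note above), and addons enable still succeeds, since the addon setting is persisted to the profile configuration and applied on the next start. A condensed sketch of the sequence, with the --images override omitted for brevity:

	$ out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-497000
	$ out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-497000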

TestStartStop/group/no-preload/serial/Stop (3.55s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-738000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-738000 --alsologtostderr -v=3: (3.5492835s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.55s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-738000 -n no-preload-738000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-738000 -n no-preload-738000: exit status 7 (53.499ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-738000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (3.93s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-155000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-155000 --alsologtostderr -v=3: (3.926047333s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.93s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-155000 -n embed-certs-155000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-155000 -n embed-certs-155000: exit status 7 (71.801541ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-155000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-258000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-258000 --alsologtostderr -v=3: (3.391423458s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.39s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-258000 -n default-k8s-diff-port-258000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-258000 -n default-k8s-diff-port-258000: exit status 7 (64.708375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-258000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-194000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.43s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-194000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-194000 --alsologtostderr -v=3: (3.431093125s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.43s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-194000 -n newest-cni-194000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-194000 -n newest-cni-194000: exit status 7 (64.922042ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-194000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/274)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.45s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-425000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-425000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-425000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-425000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-425000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-425000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-425000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-425000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-425000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-425000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-425000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: /etc/hosts:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: /etc/resolv.conf:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-425000

>>> host: crictl pods:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: crictl containers:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> k8s: describe netcat deployment:
error: context "cilium-425000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-425000" does not exist

>>> k8s: netcat logs:
error: context "cilium-425000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-425000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-425000" does not exist

>>> k8s: coredns logs:
error: context "cilium-425000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-425000" does not exist

>>> k8s: api server logs:
error: context "cilium-425000" does not exist

>>> host: /etc/cni:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: ip a s:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: ip r s:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: iptables-save:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: iptables table nat:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-425000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-425000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-425000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-425000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-425000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-425000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-425000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-425000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-425000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-425000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-425000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: kubelet daemon config:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> k8s: kubelet logs:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-425000

>>> host: docker daemon status:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: docker daemon config:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: docker system info:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: cri-docker daemon status:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: cri-docker daemon config:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: cri-dockerd version:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: containerd daemon status:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: containerd daemon config:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: containerd config dump:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: crio daemon status:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: crio daemon config:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: /etc/crio:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: crio config:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

----------------------- debugLogs end: cilium-425000 [took: 2.345322709s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-425000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-425000
--- SKIP: TestNetworkPlugins/group/cilium (2.45s)

TestStartStop/group/disable-driver-mounts (0.1s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-357000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-357000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.10s)